I0630 23:39:16.736244 8 test_context.go:427] Tolerating taints "node-role.kubernetes.io/master" when considering if nodes are ready I0630 23:39:16.736435 8 e2e.go:129] Starting e2e run "f39acfdd-f386-43a2-964e-ea79da272e01" on Ginkgo node 1 {"msg":"Test Suite starting","total":294,"completed":0,"skipped":0,"failed":0} Running Suite: Kubernetes e2e suite =================================== Random Seed: 1593560355 - Will randomize all specs Will run 294 of 5102 specs Jun 30 23:39:16.793: INFO: >>> kubeConfig: /root/.kube/config Jun 30 23:39:16.796: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable Jun 30 23:39:16.824: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready Jun 30 23:39:16.860: INFO: 12 / 12 pods in namespace 'kube-system' are running and ready (0 seconds elapsed) Jun 30 23:39:16.860: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready. Jun 30 23:39:16.860: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start Jun 30 23:39:16.868: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kindnet' (0 seconds elapsed) Jun 30 23:39:16.868: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed) Jun 30 23:39:16.868: INFO: e2e test version: v1.19.0-beta.1.98+60b800358f7784 Jun 30 23:39:16.869: INFO: kube-apiserver version: v1.18.2 Jun 30 23:39:16.869: INFO: >>> kubeConfig: /root/.kube/config Jun 30 23:39:16.874: INFO: Cluster IP family: ipv4 SSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 30 23:39:16.874: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets Jun 30 23:39:16.978: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled. STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating secret with name secret-test-map-4c22a3e4-a3f3-4119-bbb0-56f37910b604 STEP: Creating a pod to test consume secrets Jun 30 23:39:16.992: INFO: Waiting up to 5m0s for pod "pod-secrets-78678c6f-62b4-4a2a-a96f-569d13cd05e4" in namespace "secrets-9490" to be "Succeeded or Failed" Jun 30 23:39:17.003: INFO: Pod "pod-secrets-78678c6f-62b4-4a2a-a96f-569d13cd05e4": Phase="Pending", Reason="", readiness=false. Elapsed: 11.672085ms Jun 30 23:39:19.080: INFO: Pod "pod-secrets-78678c6f-62b4-4a2a-a96f-569d13cd05e4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.088582356s Jun 30 23:39:21.085: INFO: Pod "pod-secrets-78678c6f-62b4-4a2a-a96f-569d13cd05e4": Phase="Running", Reason="", readiness=true. Elapsed: 4.093322588s Jun 30 23:39:23.089: INFO: Pod "pod-secrets-78678c6f-62b4-4a2a-a96f-569d13cd05e4": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.097160711s STEP: Saw pod success Jun 30 23:39:23.089: INFO: Pod "pod-secrets-78678c6f-62b4-4a2a-a96f-569d13cd05e4" satisfied condition "Succeeded or Failed" Jun 30 23:39:23.091: INFO: Trying to get logs from node latest-worker2 pod pod-secrets-78678c6f-62b4-4a2a-a96f-569d13cd05e4 container secret-volume-test: STEP: delete the pod Jun 30 23:39:23.152: INFO: Waiting for pod pod-secrets-78678c6f-62b4-4a2a-a96f-569d13cd05e4 to disappear Jun 30 23:39:23.158: INFO: Pod pod-secrets-78678c6f-62b4-4a2a-a96f-569d13cd05e4 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 30 23:39:23.158: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-9490" for this suite. • [SLOW TEST:6.297 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:36 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":294,"completed":1,"skipped":3,"failed":0} S ------------------------------ [sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 30 23:39:23.171: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:134 [It] should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating simple DaemonSet "daemon-set" STEP: Check that daemon pods launch on every node of the cluster. 
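------------------------------
For reference, the Secrets test that just passed consumes a secret through a volume whose items list remaps a key to a new path. A minimal client-go sketch of the same shape, assuming an existing kubernetes.Interface built from the kubeconfig; the object names here (demo-secret, secret-volume-demo) are illustrative, not the generated names in the log.

package sketches

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// createSecretVolumePod mirrors the "volume with mappings" case: the secret
// key data-1 is exposed inside the container as new-path-data-1.
func createSecretVolumePod(ctx context.Context, c kubernetes.Interface, ns string) error {
	secret := &corev1.Secret{
		ObjectMeta: metav1.ObjectMeta{Name: "demo-secret"}, // illustrative name
		StringData: map[string]string{"data-1": "value-1"},
	}
	if _, err := c.CoreV1().Secrets(ns).Create(ctx, secret, metav1.CreateOptions{}); err != nil {
		return err
	}
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "secret-volume-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "secret-volume",
				VolumeSource: corev1.VolumeSource{
					Secret: &corev1.SecretVolumeSource{
						SecretName: "demo-secret",
						// The mapping under test: without Items the projected
						// file would simply be named data-1.
						Items: []corev1.KeyToPath{{Key: "data-1", Path: "new-path-data-1"}},
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:    "secret-volume-test",
				Image:   "busybox",
				Command: []string{"cat", "/etc/secret-volume/new-path-data-1"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "secret-volume",
					MountPath: "/etc/secret-volume",
				}},
			}},
		},
	}
	_, err := c.CoreV1().Pods(ns).Create(ctx, pod, metav1.CreateOptions{})
	return err
}

The framework then waits for the pod to reach "Succeeded or Failed" and reads the container log back, which is what the Elapsed/Phase polling above records. The DaemonSet test continues below.
------------------------------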
Jun 30 23:39:23.283: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 30 23:39:23.289: INFO: Number of nodes with available pods: 0 Jun 30 23:39:23.289: INFO: Node latest-worker is running more than one daemon pod Jun 30 23:39:24.295: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 30 23:39:24.298: INFO: Number of nodes with available pods: 0 Jun 30 23:39:24.298: INFO: Node latest-worker is running more than one daemon pod Jun 30 23:39:25.346: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 30 23:39:25.350: INFO: Number of nodes with available pods: 0 Jun 30 23:39:25.350: INFO: Node latest-worker is running more than one daemon pod Jun 30 23:39:26.478: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 30 23:39:26.741: INFO: Number of nodes with available pods: 0 Jun 30 23:39:26.741: INFO: Node latest-worker is running more than one daemon pod Jun 30 23:39:27.293: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 30 23:39:27.296: INFO: Number of nodes with available pods: 1 Jun 30 23:39:27.296: INFO: Node latest-worker is running more than one daemon pod Jun 30 23:39:28.295: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 30 23:39:28.298: INFO: Number of nodes with available pods: 2 Jun 30 23:39:28.298: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Stop a daemon pod, check that the daemon pod is revived. 
Jun 30 23:39:28.332: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 30 23:39:28.335: INFO: Number of nodes with available pods: 1 Jun 30 23:39:28.335: INFO: Node latest-worker is running more than one daemon pod Jun 30 23:39:29.339: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 30 23:39:29.343: INFO: Number of nodes with available pods: 1 Jun 30 23:39:29.343: INFO: Node latest-worker is running more than one daemon pod Jun 30 23:39:30.340: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 30 23:39:30.344: INFO: Number of nodes with available pods: 1 Jun 30 23:39:30.344: INFO: Node latest-worker is running more than one daemon pod Jun 30 23:39:31.341: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 30 23:39:31.344: INFO: Number of nodes with available pods: 1 Jun 30 23:39:31.344: INFO: Node latest-worker is running more than one daemon pod Jun 30 23:39:32.341: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 30 23:39:32.344: INFO: Number of nodes with available pods: 1 Jun 30 23:39:32.344: INFO: Node latest-worker is running more than one daemon pod Jun 30 23:39:33.342: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 30 23:39:33.346: INFO: Number of nodes with available pods: 1 Jun 30 23:39:33.346: INFO: Node latest-worker is running more than one daemon pod Jun 30 23:39:34.339: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 30 23:39:34.342: INFO: Number of nodes with available pods: 1 Jun 30 23:39:34.342: INFO: Node latest-worker is running more than one daemon pod Jun 30 23:39:35.342: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 30 23:39:35.347: INFO: Number of nodes with available pods: 1 Jun 30 23:39:35.347: INFO: Node latest-worker is running more than one daemon pod Jun 30 23:39:36.364: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 30 23:39:36.368: INFO: Number of nodes with available pods: 1 Jun 30 23:39:36.368: INFO: Node latest-worker is running more than one daemon pod Jun 30 23:39:37.340: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 30 23:39:37.344: INFO: Number of nodes with available pods: 1 Jun 30 23:39:37.344: INFO: Node latest-worker is running more than one daemon pod Jun 30 23:39:38.340: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints 
[{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 30 23:39:38.344: INFO: Number of nodes with available pods: 1 Jun 30 23:39:38.344: INFO: Node latest-worker is running more than one daemon pod Jun 30 23:39:39.340: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 30 23:39:39.343: INFO: Number of nodes with available pods: 2 Jun 30 23:39:39.343: INFO: Number of running nodes: 2, number of available pods: 2 [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:100 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-2292, will wait for the garbage collector to delete the pods Jun 30 23:39:39.406: INFO: Deleting DaemonSet.extensions daemon-set took: 7.807142ms Jun 30 23:39:39.707: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.241334ms Jun 30 23:39:45.311: INFO: Number of nodes with available pods: 0 Jun 30 23:39:45.311: INFO: Number of running nodes: 0, number of available pods: 0 Jun 30 23:39:45.317: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-2292/daemonsets","resourceVersion":"17231633"},"items":null} Jun 30 23:39:45.320: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-2292/pods","resourceVersion":"17231633"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 30 23:39:45.332: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-2292" for this suite. 
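------------------------------
The polling above also shows why the control-plane node is skipped: the DaemonSet's pod template carries no toleration for the node-role.kubernetes.io/master:NoSchedule taint, so only the two worker nodes count toward "running nodes". A minimal sketch of a simple DaemonSet like the one the test creates, assuming an existing kubernetes.Interface; the image and label are illustrative.

package sketches

import (
	"context"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// createSimpleDaemonSet runs one pod per schedulable node. Deleting one of
// its pods (the "stop a daemon pod" step above) makes the controller
// recreate it, which is the revival the test waits for.
func createSimpleDaemonSet(ctx context.Context, c kubernetes.Interface, ns string) error {
	labels := map[string]string{"daemonset-name": "daemon-set"}
	ds := &appsv1.DaemonSet{
		ObjectMeta: metav1.ObjectMeta{Name: "daemon-set"},
		Spec: appsv1.DaemonSetSpec{
			Selector: &metav1.LabelSelector{MatchLabels: labels},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{
					Containers: []corev1.Container{{
						Name:  "app",
						Image: "docker.io/library/httpd:2.4.38-alpine", // illustrative image
					}},
					// No toleration for node-role.kubernetes.io/master:NoSchedule,
					// hence the "skip checking this node" lines for the control plane.
				},
			},
		},
	}
	_, err := c.AppsV1().DaemonSets(ns).Create(ctx, ds, metav1.CreateOptions{})
	return err
}
------------------------------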
• [SLOW TEST:22.170 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance]","total":294,"completed":2,"skipped":4,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 30 23:39:45.341: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name configmap-test-volume-45e60448-5754-4192-8c1f-4c14cdb37a98 STEP: Creating a pod to test consume configMaps Jun 30 23:39:45.458: INFO: Waiting up to 5m0s for pod "pod-configmaps-5583af86-d374-4959-84c3-d47764074e25" in namespace "configmap-9350" to be "Succeeded or Failed" Jun 30 23:39:45.461: INFO: Pod "pod-configmaps-5583af86-d374-4959-84c3-d47764074e25": Phase="Pending", Reason="", readiness=false. Elapsed: 2.92916ms Jun 30 23:39:47.464: INFO: Pod "pod-configmaps-5583af86-d374-4959-84c3-d47764074e25": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006139285s Jun 30 23:39:49.468: INFO: Pod "pod-configmaps-5583af86-d374-4959-84c3-d47764074e25": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.01052292s STEP: Saw pod success Jun 30 23:39:49.468: INFO: Pod "pod-configmaps-5583af86-d374-4959-84c3-d47764074e25" satisfied condition "Succeeded or Failed" Jun 30 23:39:49.471: INFO: Trying to get logs from node latest-worker2 pod pod-configmaps-5583af86-d374-4959-84c3-d47764074e25 container configmap-volume-test: STEP: delete the pod Jun 30 23:39:49.490: INFO: Waiting for pod pod-configmaps-5583af86-d374-4959-84c3-d47764074e25 to disappear Jun 30 23:39:49.506: INFO: Pod pod-configmaps-5583af86-d374-4959-84c3-d47764074e25 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 30 23:39:49.506: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-9350" for this suite. 
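------------------------------
The ConfigMap variant above differs from the Secrets case only in the volume source and in setting an explicit file mode, which is why it is tagged [LinuxOnly]. A sketch of the defaultMode wiring, assuming an existing kubernetes.Interface; 0400 is a plausible mode for this test, and the names are illustrative.

package sketches

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// createConfigMapVolumePod mounts a ConfigMap with defaultMode 0400, so every
// projected file is owner-read-only; the container prints the mode back.
func createConfigMapVolumePod(ctx context.Context, c kubernetes.Interface, ns string) error {
	cm := &corev1.ConfigMap{
		ObjectMeta: metav1.ObjectMeta{Name: "demo-config"}, // illustrative name
		Data:       map[string]string{"data-1": "value-1"},
	}
	if _, err := c.CoreV1().ConfigMaps(ns).Create(ctx, cm, metav1.CreateOptions{}); err != nil {
		return err
	}
	mode := int32(0400) // DefaultMode is a *int32, expressed here in octal
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "configmap-volume-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "configmap-volume",
				VolumeSource: corev1.VolumeSource{
					ConfigMap: &corev1.ConfigMapVolumeSource{
						LocalObjectReference: corev1.LocalObjectReference{Name: "demo-config"},
						DefaultMode:          &mode,
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:    "configmap-volume-test",
				Image:   "busybox",
				Command: []string{"sh", "-c", "stat -c %a /etc/configmap-volume/data-1"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "configmap-volume",
					MountPath: "/etc/configmap-volume",
				}},
			}},
		},
	}
	_, err := c.CoreV1().Pods(ns).Create(ctx, pod, metav1.CreateOptions{})
	return err
}
------------------------------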
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":294,"completed":3,"skipped":34,"failed":0} SSSS ------------------------------ [sig-apps] Deployment deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 30 23:39:49.516: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:77 [It] deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Jun 30 23:39:49.913: INFO: Pod name cleanup-pod: Found 0 pods out of 1 Jun 30 23:39:54.916: INFO: Pod name cleanup-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Jun 30 23:39:54.916: INFO: Creating deployment test-cleanup-deployment STEP: Waiting for deployment test-cleanup-deployment history to be cleaned up [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:71 Jun 30 23:39:54.937: INFO: Deployment "test-cleanup-deployment": &Deployment{ObjectMeta:{test-cleanup-deployment deployment-1041 /apis/apps/v1/namespaces/deployment-1041/deployments/test-cleanup-deployment 33dd588a-fd74-4b89-938d-a4e9799b4aed 17231727 1 2020-06-30 23:39:54 +0000 UTC map[name:cleanup-pod] map[] [] [] [{e2e.test Update apps/v1 2020-06-30 23:39:54 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{}}},"f:strategy":{"f:rollingUpdate":{".":{},"f:maxSurge":{},"f:maxUnavailable":{}},"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}}]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:cleanup-pod] map[] [] [] []} {[] [] [{agnhost us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0032c8a58 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] 
[]}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%,},},MinReadySeconds:0,RevisionHistoryLimit:*0,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:0,Replicas:0,UpdatedReplicas:0,AvailableReplicas:0,UnavailableReplicas:0,Conditions:[]DeploymentCondition{},ReadyReplicas:0,CollisionCount:nil,},} Jun 30 23:39:54.964: INFO: New ReplicaSet "test-cleanup-deployment-6688745694" of Deployment "test-cleanup-deployment": &ReplicaSet{ObjectMeta:{test-cleanup-deployment-6688745694 deployment-1041 /apis/apps/v1/namespaces/deployment-1041/replicasets/test-cleanup-deployment-6688745694 30e78413-d7f9-4622-9b45-9f16b099803d 17231729 1 2020-06-30 23:39:54 +0000 UTC map[name:cleanup-pod pod-template-hash:6688745694] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-cleanup-deployment 33dd588a-fd74-4b89-938d-a4e9799b4aed 0xc00321c747 0xc00321c748}] [] [{kube-controller-manager Update apps/v1 2020-06-30 23:39:54 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"33dd588a-fd74-4b89-938d-a4e9799b4aed\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod-template-hash: 6688745694,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:cleanup-pod pod-template-hash:6688745694] map[] [] [] []} {[] [] [{agnhost us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc00321c7d8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:0,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Jun 30 23:39:54.964: INFO: All old ReplicaSets of Deployment "test-cleanup-deployment": Jun 30 23:39:54.964: INFO: &ReplicaSet{ObjectMeta:{test-cleanup-controller deployment-1041 /apis/apps/v1/namespaces/deployment-1041/replicasets/test-cleanup-controller f250e11b-92bf-4118-b5df-f60f67b70fd2 
17231728 1 2020-06-30 23:39:49 +0000 UTC map[name:cleanup-pod pod:httpd] map[] [{apps/v1 Deployment test-cleanup-deployment 33dd588a-fd74-4b89-938d-a4e9799b4aed 0xc00321c637 0xc00321c638}] [] [{e2e.test Update apps/v1 2020-06-30 23:39:49 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:replicas":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{},"f:pod":{}}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update apps/v1 2020-06-30 23:39:54 +0000 UTC FieldsV1 {"f:metadata":{"f:ownerReferences":{".":{},"k:{\"uid\":\"33dd588a-fd74-4b89-938d-a4e9799b4aed\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:cleanup-pod pod:httpd] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc00321c6d8 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} Jun 30 23:39:55.053: INFO: Pod "test-cleanup-controller-hkxtr" is available: &Pod{ObjectMeta:{test-cleanup-controller-hkxtr test-cleanup-controller- deployment-1041 /api/v1/namespaces/deployment-1041/pods/test-cleanup-controller-hkxtr 0e68f066-3e5c-4657-a736-cb889fd287ce 17231712 0 2020-06-30 23:39:49 +0000 UTC map[name:cleanup-pod pod:httpd] map[] [{apps/v1 ReplicaSet test-cleanup-controller f250e11b-92bf-4118-b5df-f60f67b70fd2 0xc0032c8ddf 0xc0032c8df0}] [] [{kube-controller-manager Update v1 2020-06-30 23:39:49 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f250e11b-92bf-4118-b5df-f60f67b70fd2\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-06-30 23:39:53 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.2.84\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-78kvl,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-78kvl,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-78kvl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-30 23:39:49 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-30 23:39:53 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-30 23:39:53 +0000 
UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-30 23:39:49 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:10.244.2.84,StartTime:2020-06-30 23:39:49 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-06-30 23:39:52 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://4e108cfbd5165e707205756adaf2e13b82d8d50fed3307d8cb9674c55358afcf,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.84,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Jun 30 23:39:55.053: INFO: Pod "test-cleanup-deployment-6688745694-t869j" is not available: &Pod{ObjectMeta:{test-cleanup-deployment-6688745694-t869j test-cleanup-deployment-6688745694- deployment-1041 /api/v1/namespaces/deployment-1041/pods/test-cleanup-deployment-6688745694-t869j b9f74255-0960-4675-9a73-7e316646efb0 17231734 0 2020-06-30 23:39:54 +0000 UTC map[name:cleanup-pod pod-template-hash:6688745694] map[] [{apps/v1 ReplicaSet test-cleanup-deployment-6688745694 30e78413-d7f9-4622-9b45-9f16b099803d 0xc0032c8fa7 0xc0032c8fa8}] [] [{kube-controller-manager Update v1 2020-06-30 23:39:54 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"30e78413-d7f9-4622-9b45-9f16b099803d\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-78kvl,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-78kvl,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-78kvl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,Allo
wPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-30 23:39:54 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 30 23:39:55.053: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-1041" for this suite. 
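------------------------------
The deployment dump above shows RevisionHistoryLimit:*0, which is what makes the controller delete superseded ReplicaSets (like test-cleanup-controller) as soon as the new rollout completes. A sketch of a Deployment configured that way, assuming an existing kubernetes.Interface; the agnhost image matches the dump, everything else is illustrative.

package sketches

import (
	"context"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// createCleanupDeployment sets RevisionHistoryLimit to 0 so that old
// ReplicaSets are garbage-collected once the new ReplicaSet is rolled out,
// which is the behavior the "should delete old replica sets" test asserts.
func createCleanupDeployment(ctx context.Context, c kubernetes.Interface, ns string) error {
	replicas := int32(1)
	historyLimit := int32(0)
	labels := map[string]string{"name": "cleanup-pod"}
	d := &appsv1.Deployment{
		ObjectMeta: metav1.ObjectMeta{Name: "test-cleanup-deployment"},
		Spec: appsv1.DeploymentSpec{
			Replicas:             &replicas,
			RevisionHistoryLimit: &historyLimit,
			Selector:             &metav1.LabelSelector{MatchLabels: labels},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{
					Containers: []corev1.Container{{
						Name:  "agnhost",
						Image: "us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13",
					}},
				},
			},
		},
	}
	_, err := c.AppsV1().Deployments(ns).Create(ctx, d, metav1.CreateOptions{})
	return err
}
------------------------------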
• [SLOW TEST:5.600 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] Deployment deployment should delete old replica sets [Conformance]","total":294,"completed":4,"skipped":38,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 30 23:39:55.116: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:809 [It] should be able to change the type from ExternalName to NodePort [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a service externalname-service with the type=ExternalName in namespace services-9740 STEP: changing the ExternalName service to type=NodePort STEP: creating replication controller externalname-service in namespace services-9740 I0630 23:39:55.295459 8 runners.go:190] Created replication controller with name: externalname-service, namespace: services-9740, replica count: 2 I0630 23:39:58.345968 8 runners.go:190] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0630 23:40:01.346227 8 runners.go:190] externalname-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Jun 30 23:40:01.346: INFO: Creating new exec pod Jun 30 23:40:06.424: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-9740 execpodwkgsp -- /bin/sh -x -c nc -zv -t -w 2 externalname-service 80' Jun 30 23:40:09.300: INFO: stderr: "I0630 23:40:09.127322 30 log.go:172] (0xc00003a160) (0xc0007fd0e0) Create stream\nI0630 23:40:09.127392 30 log.go:172] (0xc00003a160) (0xc0007fd0e0) Stream added, broadcasting: 1\nI0630 23:40:09.129713 30 log.go:172] (0xc00003a160) Reply frame received for 1\nI0630 23:40:09.129758 30 log.go:172] (0xc00003a160) (0xc0007148c0) Create stream\nI0630 23:40:09.129777 30 log.go:172] (0xc00003a160) (0xc0007148c0) Stream added, broadcasting: 3\nI0630 23:40:09.130841 30 log.go:172] (0xc00003a160) Reply frame received for 3\nI0630 23:40:09.130895 30 log.go:172] (0xc00003a160) (0xc0007fd860) Create stream\nI0630 23:40:09.130916 30 log.go:172] (0xc00003a160) (0xc0007fd860) Stream added, broadcasting: 5\nI0630 23:40:09.131810 30 log.go:172] (0xc00003a160) Reply frame received for 5\nI0630 23:40:09.272433 30 log.go:172] (0xc00003a160) Data frame received for 5\nI0630 23:40:09.272464 30 log.go:172] (0xc0007fd860) (5) Data frame handling\nI0630 23:40:09.272485 30 log.go:172] (0xc0007fd860) (5) Data frame sent\n+ nc -zv -t -w 2 externalname-service 80\nI0630 23:40:09.289854 30 log.go:172] 
(0xc00003a160) Data frame received for 5\nI0630 23:40:09.289893 30 log.go:172] (0xc0007fd860) (5) Data frame handling\nI0630 23:40:09.289925 30 log.go:172] (0xc0007fd860) (5) Data frame sent\nConnection to externalname-service 80 port [tcp/http] succeeded!\nI0630 23:40:09.289995 30 log.go:172] (0xc00003a160) Data frame received for 5\nI0630 23:40:09.290021 30 log.go:172] (0xc0007fd860) (5) Data frame handling\nI0630 23:40:09.290280 30 log.go:172] (0xc00003a160) Data frame received for 3\nI0630 23:40:09.290296 30 log.go:172] (0xc0007148c0) (3) Data frame handling\nI0630 23:40:09.291917 30 log.go:172] (0xc00003a160) Data frame received for 1\nI0630 23:40:09.291940 30 log.go:172] (0xc0007fd0e0) (1) Data frame handling\nI0630 23:40:09.291970 30 log.go:172] (0xc0007fd0e0) (1) Data frame sent\nI0630 23:40:09.292026 30 log.go:172] (0xc00003a160) (0xc0007fd0e0) Stream removed, broadcasting: 1\nI0630 23:40:09.292218 30 log.go:172] (0xc00003a160) Go away received\nI0630 23:40:09.292418 30 log.go:172] (0xc00003a160) (0xc0007fd0e0) Stream removed, broadcasting: 1\nI0630 23:40:09.292443 30 log.go:172] (0xc00003a160) (0xc0007148c0) Stream removed, broadcasting: 3\nI0630 23:40:09.292459 30 log.go:172] (0xc00003a160) (0xc0007fd860) Stream removed, broadcasting: 5\n" Jun 30 23:40:09.300: INFO: stdout: "" Jun 30 23:40:09.301: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-9740 execpodwkgsp -- /bin/sh -x -c nc -zv -t -w 2 10.104.110.148 80' Jun 30 23:40:09.531: INFO: stderr: "I0630 23:40:09.432716 61 log.go:172] (0xc000ae71e0) (0xc00087dc20) Create stream\nI0630 23:40:09.432800 61 log.go:172] (0xc000ae71e0) (0xc00087dc20) Stream added, broadcasting: 1\nI0630 23:40:09.440631 61 log.go:172] (0xc000ae71e0) Reply frame received for 1\nI0630 23:40:09.440679 61 log.go:172] (0xc000ae71e0) (0xc000874500) Create stream\nI0630 23:40:09.440708 61 log.go:172] (0xc000ae71e0) (0xc000874500) Stream added, broadcasting: 3\nI0630 23:40:09.442529 61 log.go:172] (0xc000ae71e0) Reply frame received for 3\nI0630 23:40:09.442588 61 log.go:172] (0xc000ae71e0) (0xc000830000) Create stream\nI0630 23:40:09.442740 61 log.go:172] (0xc000ae71e0) (0xc000830000) Stream added, broadcasting: 5\nI0630 23:40:09.443774 61 log.go:172] (0xc000ae71e0) Reply frame received for 5\nI0630 23:40:09.522882 61 log.go:172] (0xc000ae71e0) Data frame received for 5\nI0630 23:40:09.523026 61 log.go:172] (0xc000830000) (5) Data frame handling\n+ nc -zv -t -w 2 10.104.110.148 80\nConnection to 10.104.110.148 80 port [tcp/http] succeeded!\nI0630 23:40:09.523064 61 log.go:172] (0xc000ae71e0) Data frame received for 3\nI0630 23:40:09.523121 61 log.go:172] (0xc000874500) (3) Data frame handling\nI0630 23:40:09.523160 61 log.go:172] (0xc000830000) (5) Data frame sent\nI0630 23:40:09.523184 61 log.go:172] (0xc000ae71e0) Data frame received for 5\nI0630 23:40:09.523215 61 log.go:172] (0xc000830000) (5) Data frame handling\nI0630 23:40:09.524450 61 log.go:172] (0xc000ae71e0) Data frame received for 1\nI0630 23:40:09.524493 61 log.go:172] (0xc00087dc20) (1) Data frame handling\nI0630 23:40:09.524520 61 log.go:172] (0xc00087dc20) (1) Data frame sent\nI0630 23:40:09.524538 61 log.go:172] (0xc000ae71e0) (0xc00087dc20) Stream removed, broadcasting: 1\nI0630 23:40:09.524560 61 log.go:172] (0xc000ae71e0) Go away received\nI0630 23:40:09.525040 61 log.go:172] (0xc000ae71e0) (0xc00087dc20) Stream removed, broadcasting: 1\nI0630 23:40:09.525066 61 log.go:172] (0xc000ae71e0) (0xc000874500) 
Stream removed, broadcasting: 3\nI0630 23:40:09.525079 61 log.go:172] (0xc000ae71e0) (0xc000830000) Stream removed, broadcasting: 5\n" Jun 30 23:40:09.531: INFO: stdout: "" Jun 30 23:40:09.531: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-9740 execpodwkgsp -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.13 31532' Jun 30 23:40:09.749: INFO: stderr: "I0630 23:40:09.670321 84 log.go:172] (0xc00003a0b0) (0xc000690a00) Create stream\nI0630 23:40:09.670392 84 log.go:172] (0xc00003a0b0) (0xc000690a00) Stream added, broadcasting: 1\nI0630 23:40:09.672431 84 log.go:172] (0xc00003a0b0) Reply frame received for 1\nI0630 23:40:09.672475 84 log.go:172] (0xc00003a0b0) (0xc00066aaa0) Create stream\nI0630 23:40:09.672488 84 log.go:172] (0xc00003a0b0) (0xc00066aaa0) Stream added, broadcasting: 3\nI0630 23:40:09.674031 84 log.go:172] (0xc00003a0b0) Reply frame received for 3\nI0630 23:40:09.674062 84 log.go:172] (0xc00003a0b0) (0xc00066afa0) Create stream\nI0630 23:40:09.674075 84 log.go:172] (0xc00003a0b0) (0xc00066afa0) Stream added, broadcasting: 5\nI0630 23:40:09.675016 84 log.go:172] (0xc00003a0b0) Reply frame received for 5\nI0630 23:40:09.739255 84 log.go:172] (0xc00003a0b0) Data frame received for 3\nI0630 23:40:09.739287 84 log.go:172] (0xc00066aaa0) (3) Data frame handling\nI0630 23:40:09.739305 84 log.go:172] (0xc00003a0b0) Data frame received for 5\nI0630 23:40:09.739312 84 log.go:172] (0xc00066afa0) (5) Data frame handling\nI0630 23:40:09.739341 84 log.go:172] (0xc00066afa0) (5) Data frame sent\nI0630 23:40:09.739348 84 log.go:172] (0xc00003a0b0) Data frame received for 5\nI0630 23:40:09.739354 84 log.go:172] (0xc00066afa0) (5) Data frame handling\n+ nc -zv -t -w 2 172.17.0.13 31532\nConnection to 172.17.0.13 31532 port [tcp/31532] succeeded!\nI0630 23:40:09.741288 84 log.go:172] (0xc00003a0b0) Data frame received for 1\nI0630 23:40:09.741367 84 log.go:172] (0xc000690a00) (1) Data frame handling\nI0630 23:40:09.741388 84 log.go:172] (0xc000690a00) (1) Data frame sent\nI0630 23:40:09.741407 84 log.go:172] (0xc00003a0b0) (0xc000690a00) Stream removed, broadcasting: 1\nI0630 23:40:09.741424 84 log.go:172] (0xc00003a0b0) Go away received\nI0630 23:40:09.741965 84 log.go:172] (0xc00003a0b0) (0xc000690a00) Stream removed, broadcasting: 1\nI0630 23:40:09.741991 84 log.go:172] (0xc00003a0b0) (0xc00066aaa0) Stream removed, broadcasting: 3\nI0630 23:40:09.742001 84 log.go:172] (0xc00003a0b0) (0xc00066afa0) Stream removed, broadcasting: 5\n" Jun 30 23:40:09.749: INFO: stdout: "" Jun 30 23:40:09.749: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-9740 execpodwkgsp -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.12 31532' Jun 30 23:40:09.972: INFO: stderr: "I0630 23:40:09.880285 104 log.go:172] (0xc0000e8370) (0xc00043a140) Create stream\nI0630 23:40:09.880354 104 log.go:172] (0xc0000e8370) (0xc00043a140) Stream added, broadcasting: 1\nI0630 23:40:09.882812 104 log.go:172] (0xc0000e8370) Reply frame received for 1\nI0630 23:40:09.882854 104 log.go:172] (0xc0000e8370) (0xc0000f2e60) Create stream\nI0630 23:40:09.882867 104 log.go:172] (0xc0000e8370) (0xc0000f2e60) Stream added, broadcasting: 3\nI0630 23:40:09.883670 104 log.go:172] (0xc0000e8370) Reply frame received for 3\nI0630 23:40:09.883713 104 log.go:172] (0xc0000e8370) (0xc00014f400) Create stream\nI0630 23:40:09.883728 104 log.go:172] (0xc0000e8370) (0xc00014f400) Stream added, broadcasting: 
5\nI0630 23:40:09.884513 104 log.go:172] (0xc0000e8370) Reply frame received for 5\nI0630 23:40:09.964123 104 log.go:172] (0xc0000e8370) Data frame received for 3\nI0630 23:40:09.964179 104 log.go:172] (0xc0000f2e60) (3) Data frame handling\nI0630 23:40:09.964205 104 log.go:172] (0xc0000e8370) Data frame received for 5\nI0630 23:40:09.964221 104 log.go:172] (0xc00014f400) (5) Data frame handling\nI0630 23:40:09.964231 104 log.go:172] (0xc00014f400) (5) Data frame sent\nI0630 23:40:09.964241 104 log.go:172] (0xc0000e8370) Data frame received for 5\nI0630 23:40:09.964248 104 log.go:172] (0xc00014f400) (5) Data frame handling\n+ nc -zv -t -w 2 172.17.0.12 31532\nConnection to 172.17.0.12 31532 port [tcp/31532] succeeded!\nI0630 23:40:09.965967 104 log.go:172] (0xc0000e8370) Data frame received for 1\nI0630 23:40:09.965991 104 log.go:172] (0xc00043a140) (1) Data frame handling\nI0630 23:40:09.966004 104 log.go:172] (0xc00043a140) (1) Data frame sent\nI0630 23:40:09.966018 104 log.go:172] (0xc0000e8370) (0xc00043a140) Stream removed, broadcasting: 1\nI0630 23:40:09.966096 104 log.go:172] (0xc0000e8370) Go away received\nI0630 23:40:09.966339 104 log.go:172] (0xc0000e8370) (0xc00043a140) Stream removed, broadcasting: 1\nI0630 23:40:09.966354 104 log.go:172] (0xc0000e8370) (0xc0000f2e60) Stream removed, broadcasting: 3\nI0630 23:40:09.966360 104 log.go:172] (0xc0000e8370) (0xc00014f400) Stream removed, broadcasting: 5\n" Jun 30 23:40:09.972: INFO: stdout: "" Jun 30 23:40:09.972: INFO: Cleaning up the ExternalName to NodePort test service [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 30 23:40:10.046: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-9740" for this suite. 
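------------------------------
The type change exercised above is an in-place Service update: the externalName field must be cleared when the type stops being ExternalName, after which the apiserver allocates a node port (31532 in this run) that the nc probes then hit on each node IP. A sketch of the mutation, assuming an existing kubernetes.Interface and a service already backed by the replication controller's pods; the selector label is illustrative.

package sketches

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
	"k8s.io/client-go/kubernetes"
)

// externalNameToNodePort flips a type=ExternalName service to type=NodePort.
func externalNameToNodePort(ctx context.Context, c kubernetes.Interface, ns, name string) error {
	svc, err := c.CoreV1().Services(ns).Get(ctx, name, metav1.GetOptions{})
	if err != nil {
		return err
	}
	svc.Spec.Type = corev1.ServiceTypeNodePort
	svc.Spec.ExternalName = "" // must be cleared when leaving type=ExternalName
	// Point the service at real pods; this label is illustrative.
	svc.Spec.Selector = map[string]string{"name": name}
	svc.Spec.Ports = []corev1.ServicePort{{Port: 80, TargetPort: intstr.FromInt(80)}}
	_, err = c.CoreV1().Services(ns).Update(ctx, svc, metav1.UpdateOptions{})
	return err
}

The `nc -zv -t -w 2 <addr> <port>` invocations in the log then verify reachability three ways: by service name, by cluster IP, and by each node IP on the allocated node port.
------------------------------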
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:813 • [SLOW TEST:14.947 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from ExternalName to NodePort [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]","total":294,"completed":5,"skipped":58,"failed":0} [k8s.io] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 30 23:40:10.063: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test env composition Jun 30 23:40:10.131: INFO: Waiting up to 5m0s for pod "var-expansion-8715e614-557a-4f64-992c-de948d5507ef" in namespace "var-expansion-1283" to be "Succeeded or Failed" Jun 30 23:40:10.135: INFO: Pod "var-expansion-8715e614-557a-4f64-992c-de948d5507ef": Phase="Pending", Reason="", readiness=false. Elapsed: 3.74255ms Jun 30 23:40:12.219: INFO: Pod "var-expansion-8715e614-557a-4f64-992c-de948d5507ef": Phase="Pending", Reason="", readiness=false. Elapsed: 2.087469215s Jun 30 23:40:14.223: INFO: Pod "var-expansion-8715e614-557a-4f64-992c-de948d5507ef": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.09119337s STEP: Saw pod success Jun 30 23:40:14.223: INFO: Pod "var-expansion-8715e614-557a-4f64-992c-de948d5507ef" satisfied condition "Succeeded or Failed" Jun 30 23:40:14.225: INFO: Trying to get logs from node latest-worker2 pod var-expansion-8715e614-557a-4f64-992c-de948d5507ef container dapi-container: STEP: delete the pod Jun 30 23:40:14.323: INFO: Waiting for pod var-expansion-8715e614-557a-4f64-992c-de948d5507ef to disappear Jun 30 23:40:14.339: INFO: Pod var-expansion-8715e614-557a-4f64-992c-de948d5507ef no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 30 23:40:14.340: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-1283" for this suite. 
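------------------------------
Env composition, as tested above, means an EnvVar value may reference variables declared earlier in the same container with $(NAME); the kubelet expands them before starting the container. A minimal sketch of such a pod, assuming an existing kubernetes.Interface; variable names and values are illustrative.

package sketches

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// createEnvCompositionPod shows $(VAR) expansion: FOOBAR is composed from the
// two variables declared before it in the same container.
func createEnvCompositionPod(ctx context.Context, c kubernetes.Interface, ns string) error {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "var-expansion-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "dapi-container",
				Image:   "busybox",
				Command: []string{"sh", "-c", "env"},
				Env: []corev1.EnvVar{
					{Name: "FOO", Value: "foo-value"},
					{Name: "BAR", Value: "bar-value"},
					// Expanded by the kubelet to "foo-value;;bar-value".
					{Name: "FOOBAR", Value: "$(FOO);;$(BAR)"},
				},
			}},
		},
	}
	_, err := c.CoreV1().Pods(ns).Create(ctx, pod, metav1.CreateOptions{})
	return err
}
------------------------------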
•{"msg":"PASSED [k8s.io] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance]","total":294,"completed":6,"skipped":58,"failed":0} SSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should have a working scale subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 30 23:40:14.359: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:88 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:103 STEP: Creating service test in namespace statefulset-2512 [It] should have a working scale subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating statefulset ss in namespace statefulset-2512 Jun 30 23:40:14.507: INFO: Found 0 stateful pods, waiting for 1 Jun 30 23:40:24.511: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: getting scale subresource STEP: updating a scale subresource STEP: verifying the statefulset Spec.Replicas was modified [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:114 Jun 30 23:40:24.569: INFO: Deleting all statefulset in ns statefulset-2512 Jun 30 23:40:24.586: INFO: Scaling statefulset ss to 0 Jun 30 23:40:44.647: INFO: Waiting for statefulset status.replicas updated to 0 Jun 30 23:40:44.650: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 30 23:40:44.667: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-2512" for this suite. 
• [SLOW TEST:30.315 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should have a working scale subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should have a working scale subresource [Conformance]","total":294,"completed":7,"skipped":64,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 30 23:40:44.674: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:809 [It] should be able to change the type from NodePort to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a service nodeport-service with the type=NodePort in namespace services-5837 STEP: Creating active service to test reachability when its FQDN is referred as externalName for another service STEP: creating service externalsvc in namespace services-5837 STEP: creating replication controller externalsvc in namespace services-5837 I0630 23:40:44.941407 8 runners.go:190] Created replication controller with name: externalsvc, namespace: services-5837, replica count: 2 I0630 23:40:47.991854 8 runners.go:190] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0630 23:40:50.992144 8 runners.go:190] externalsvc Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady STEP: changing the NodePort service to type=ExternalName Jun 30 23:40:51.090: INFO: Creating new exec pod Jun 30 23:40:55.154: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-5837 execpodbngx5 -- /bin/sh -x -c nslookup nodeport-service' Jun 30 23:40:55.513: INFO: stderr: "I0630 23:40:55.302465 124 log.go:172] (0xc00099ad10) (0xc00085d220) Create stream\nI0630 23:40:55.302561 124 log.go:172] (0xc00099ad10) (0xc00085d220) Stream added, broadcasting: 1\nI0630 23:40:55.305956 124 log.go:172] (0xc00099ad10) Reply frame received for 1\nI0630 23:40:55.305999 124 log.go:172] (0xc00099ad10) (0xc0008420a0) Create stream\nI0630 23:40:55.306012 124 log.go:172] (0xc00099ad10) (0xc0008420a0) Stream added, broadcasting: 3\nI0630 23:40:55.307158 124 log.go:172] (0xc00099ad10) Reply frame received for 3\nI0630 23:40:55.307191 124 log.go:172] (0xc00099ad10) (0xc000843180) Create stream\nI0630 23:40:55.307203 124 log.go:172] (0xc00099ad10) (0xc000843180) Stream added, broadcasting: 5\nI0630 23:40:55.308334 124 log.go:172] 
(0xc00099ad10) Reply frame received for 5\nI0630 23:40:55.377534 124 log.go:172] (0xc00099ad10) Data frame received for 5\nI0630 23:40:55.377588 124 log.go:172] (0xc000843180) (5) Data frame handling\nI0630 23:40:55.377622 124 log.go:172] (0xc000843180) (5) Data frame sent\n+ nslookup nodeport-service\nI0630 23:40:55.503406 124 log.go:172] (0xc00099ad10) Data frame received for 3\nI0630 23:40:55.503437 124 log.go:172] (0xc0008420a0) (3) Data frame handling\nI0630 23:40:55.503474 124 log.go:172] (0xc0008420a0) (3) Data frame sent\nI0630 23:40:55.504522 124 log.go:172] (0xc00099ad10) Data frame received for 3\nI0630 23:40:55.504541 124 log.go:172] (0xc0008420a0) (3) Data frame handling\nI0630 23:40:55.504552 124 log.go:172] (0xc0008420a0) (3) Data frame sent\nI0630 23:40:55.505066 124 log.go:172] (0xc00099ad10) Data frame received for 3\nI0630 23:40:55.505080 124 log.go:172] (0xc0008420a0) (3) Data frame handling\nI0630 23:40:55.505476 124 log.go:172] (0xc00099ad10) Data frame received for 5\nI0630 23:40:55.505491 124 log.go:172] (0xc000843180) (5) Data frame handling\nI0630 23:40:55.507228 124 log.go:172] (0xc00099ad10) Data frame received for 1\nI0630 23:40:55.507239 124 log.go:172] (0xc00085d220) (1) Data frame handling\nI0630 23:40:55.507249 124 log.go:172] (0xc00085d220) (1) Data frame sent\nI0630 23:40:55.507501 124 log.go:172] (0xc00099ad10) (0xc00085d220) Stream removed, broadcasting: 1\nI0630 23:40:55.507617 124 log.go:172] (0xc00099ad10) Go away received\nI0630 23:40:55.507798 124 log.go:172] (0xc00099ad10) (0xc00085d220) Stream removed, broadcasting: 1\nI0630 23:40:55.507810 124 log.go:172] (0xc00099ad10) (0xc0008420a0) Stream removed, broadcasting: 3\nI0630 23:40:55.507816 124 log.go:172] (0xc00099ad10) (0xc000843180) Stream removed, broadcasting: 5\n" Jun 30 23:40:55.514: INFO: stdout: "Server:\t\t10.96.0.10\nAddress:\t10.96.0.10#53\n\nnodeport-service.services-5837.svc.cluster.local\tcanonical name = externalsvc.services-5837.svc.cluster.local.\nName:\texternalsvc.services-5837.svc.cluster.local\nAddress: 10.98.55.232\n\n" STEP: deleting ReplicationController externalsvc in namespace services-5837, will wait for the garbage collector to delete the pods Jun 30 23:40:55.587: INFO: Deleting ReplicationController externalsvc took: 6.608423ms Jun 30 23:40:55.687: INFO: Terminating ReplicationController externalsvc pods took: 100.199855ms Jun 30 23:41:05.419: INFO: Cleaning up the NodePort to ExternalName test service [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 30 23:41:05.433: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-5837" for this suite. 
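The type flip the test performs ("changing the NodePort service to type=ExternalName") amounts to a Service update along these lines. A hedged sketch only: the exact field clearing the suite does is not shown in the log, and clearing ClusterIP/Ports is an assumption about what validation requires for the conversion:

    package main

    import (
    	"context"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
    	if err != nil {
    		panic(err)
    	}
    	cs := kubernetes.NewForConfigOrDie(cfg)
    	ctx := context.Background()

    	svcs := cs.CoreV1().Services("services-5837")
    	svc, err := svcs.Get(ctx, "nodeport-service", metav1.GetOptions{})
    	if err != nil {
    		panic(err)
    	}
    	// Repoint the service at the backing service's FQDN. ExternalName
    	// services answer via a DNS CNAME (the nslookup output above shows
    	// exactly that), so the cluster IP and ports go away.
    	svc.Spec.Type = corev1.ServiceTypeExternalName
    	svc.Spec.ExternalName = "externalsvc.services-5837.svc.cluster.local"
    	svc.Spec.ClusterIP = ""
    	svc.Spec.Ports = nil
    	if _, err := svcs.Update(ctx, svc, metav1.UpdateOptions{}); err != nil {
    		panic(err)
    	}
    }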
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:813 • [SLOW TEST:20.774 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from NodePort to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance]","total":294,"completed":8,"skipped":94,"failed":0} SSSSSSS ------------------------------ [sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 30 23:41:05.448: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin Jun 30 23:41:05.548: INFO: Waiting up to 5m0s for pod "downwardapi-volume-68ab82e5-6371-4b08-bf3e-ab94fa9734d7" in namespace "downward-api-3462" to be "Succeeded or Failed" Jun 30 23:41:05.567: INFO: Pod "downwardapi-volume-68ab82e5-6371-4b08-bf3e-ab94fa9734d7": Phase="Pending", Reason="", readiness=false. Elapsed: 19.388489ms Jun 30 23:41:07.571: INFO: Pod "downwardapi-volume-68ab82e5-6371-4b08-bf3e-ab94fa9734d7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02372042s Jun 30 23:41:09.577: INFO: Pod "downwardapi-volume-68ab82e5-6371-4b08-bf3e-ab94fa9734d7": Phase="Running", Reason="", readiness=true. Elapsed: 4.028972739s Jun 30 23:41:11.591: INFO: Pod "downwardapi-volume-68ab82e5-6371-4b08-bf3e-ab94fa9734d7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.042990423s STEP: Saw pod success Jun 30 23:41:11.591: INFO: Pod "downwardapi-volume-68ab82e5-6371-4b08-bf3e-ab94fa9734d7" satisfied condition "Succeeded or Failed" Jun 30 23:41:11.593: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-68ab82e5-6371-4b08-bf3e-ab94fa9734d7 container client-container: STEP: delete the pod Jun 30 23:41:11.632: INFO: Waiting for pod downwardapi-volume-68ab82e5-6371-4b08-bf3e-ab94fa9734d7 to disappear Jun 30 23:41:11.641: INFO: Pod downwardapi-volume-68ab82e5-6371-4b08-bf3e-ab94fa9734d7 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 30 23:41:11.641: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-3462" for this suite. 
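The "downward API volume plugin" the test exercises injects pod metadata as files. A sketch of the pod shape, assuming a busybox image and illustrative paths (the suite uses its own test image and mount layout):

    package main

    import (
    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    // podnamePod mounts the pod's own name at /etc/podinfo/podname through
    // the downward API volume plugin.
    func podnamePod() *corev1.Pod {
    	return &corev1.Pod{
    		ObjectMeta: metav1.ObjectMeta{Name: "downwardapi-volume-demo"},
    		Spec: corev1.PodSpec{
    			RestartPolicy: corev1.RestartPolicyNever,
    			Containers: []corev1.Container{{
    				Name:         "client-container",
    				Image:        "busybox", // stand-in image
    				Command:      []string{"sh", "-c", "cat /etc/podinfo/podname"},
    				VolumeMounts: []corev1.VolumeMount{{Name: "podinfo", MountPath: "/etc/podinfo"}},
    			}},
    			Volumes: []corev1.Volume{{
    				Name: "podinfo",
    				VolumeSource: corev1.VolumeSource{
    					DownwardAPI: &corev1.DownwardAPIVolumeSource{
    						Items: []corev1.DownwardAPIVolumeFile{{
    							Path:     "podname",
    							FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.name"},
    						}},
    					},
    				},
    			}},
    		},
    	}
    }

    func main() { _ = podnamePod() }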
• [SLOW TEST:6.199 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:37 should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance]","total":294,"completed":9,"skipped":101,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 30 23:41:11.648: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to start watching from a specific resource version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a new configmap STEP: modifying the configmap once STEP: modifying the configmap a second time STEP: deleting the configmap STEP: creating a watch on configmaps from the resource version returned by the first update STEP: Expecting to observe notifications for all changes to the configmap after the first update Jun 30 23:41:11.787: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version watch-7586 /api/v1/namespaces/watch-7586/configmaps/e2e-watch-test-resource-version c3235a68-239e-439c-9c55-a13eb4ea855a 17232302 0 2020-06-30 23:41:11 +0000 UTC map[watch-this-configmap:from-resource-version] map[] [] [] [{e2e.test Update v1 2020-06-30 23:41:11 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} Jun 30 23:41:11.788: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version watch-7586 /api/v1/namespaces/watch-7586/configmaps/e2e-watch-test-resource-version c3235a68-239e-439c-9c55-a13eb4ea855a 17232303 0 2020-06-30 23:41:11 +0000 UTC map[watch-this-configmap:from-resource-version] map[] [] [] [{e2e.test Update v1 2020-06-30 23:41:11 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 30 23:41:11.788: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-7586" for this suite. 
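The watch behaviour verified above, replaying events from a past point in history, comes from setting ResourceVersion on the watch options. A minimal client-go sketch; the resource version below is a placeholder standing in for the value returned by the first update:

    package main

    import (
    	"context"
    	"fmt"

    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
    	if err != nil {
    		panic(err)
    	}
    	cs := kubernetes.NewForConfigOrDie(cfg)

    	// Starting the watch at an older resourceVersion makes the apiserver
    	// replay everything after that point, which is why the test receives
    	// the second MODIFIED and the DELETED events even though both
    	// happened before the watch was opened.
    	w, err := cs.CoreV1().ConfigMaps("watch-7586").Watch(context.Background(), metav1.ListOptions{
    		ResourceVersion: "17232301", // placeholder: RV from the first update
    	})
    	if err != nil {
    		panic(err)
    	}
    	defer w.Stop()
    	for ev := range w.ResultChan() {
    		fmt.Println("Got :", ev.Type, ev.Object)
    	}
    }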
•{"msg":"PASSED [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance]","total":294,"completed":10,"skipped":124,"failed":0} SSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 30 23:41:11.862: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir 0644 on tmpfs Jun 30 23:41:11.915: INFO: Waiting up to 5m0s for pod "pod-a1ae52c5-85a5-429a-9446-ca9bf993fab4" in namespace "emptydir-2806" to be "Succeeded or Failed" Jun 30 23:41:11.917: INFO: Pod "pod-a1ae52c5-85a5-429a-9446-ca9bf993fab4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.241795ms Jun 30 23:41:13.922: INFO: Pod "pod-a1ae52c5-85a5-429a-9446-ca9bf993fab4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006883201s Jun 30 23:41:15.926: INFO: Pod "pod-a1ae52c5-85a5-429a-9446-ca9bf993fab4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010856455s STEP: Saw pod success Jun 30 23:41:15.926: INFO: Pod "pod-a1ae52c5-85a5-429a-9446-ca9bf993fab4" satisfied condition "Succeeded or Failed" Jun 30 23:41:15.929: INFO: Trying to get logs from node latest-worker pod pod-a1ae52c5-85a5-429a-9446-ca9bf993fab4 container test-container: STEP: delete the pod Jun 30 23:41:16.112: INFO: Waiting for pod pod-a1ae52c5-85a5-429a-9446-ca9bf993fab4 to disappear Jun 30 23:41:16.196: INFO: Pod pod-a1ae52c5-85a5-429a-9446-ca9bf993fab4 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 30 23:41:16.196: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-2806" for this suite. 
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":294,"completed":11,"skipped":128,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 30 23:41:16.247: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name configmap-test-volume-57b615e4-7edf-4103-a8e2-a4b7ac19986d STEP: Creating a pod to test consume configMaps Jun 30 23:41:16.321: INFO: Waiting up to 5m0s for pod "pod-configmaps-8371e7da-a2d0-464a-9115-2051c24805a0" in namespace "configmap-5531" to be "Succeeded or Failed" Jun 30 23:41:16.333: INFO: Pod "pod-configmaps-8371e7da-a2d0-464a-9115-2051c24805a0": Phase="Pending", Reason="", readiness=false. Elapsed: 12.232227ms Jun 30 23:41:18.382: INFO: Pod "pod-configmaps-8371e7da-a2d0-464a-9115-2051c24805a0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.061075688s Jun 30 23:41:20.386: INFO: Pod "pod-configmaps-8371e7da-a2d0-464a-9115-2051c24805a0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.065498552s STEP: Saw pod success Jun 30 23:41:20.386: INFO: Pod "pod-configmaps-8371e7da-a2d0-464a-9115-2051c24805a0" satisfied condition "Succeeded or Failed" Jun 30 23:41:20.389: INFO: Trying to get logs from node latest-worker2 pod pod-configmaps-8371e7da-a2d0-464a-9115-2051c24805a0 container configmap-volume-test: STEP: delete the pod Jun 30 23:41:20.438: INFO: Waiting for pod pod-configmaps-8371e7da-a2d0-464a-9115-2051c24805a0 to disappear Jun 30 23:41:20.443: INFO: Pod pod-configmaps-8371e7da-a2d0-464a-9115-2051c24805a0 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 30 23:41:20.443: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-5531" for this suite. 
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":294,"completed":12,"skipped":148,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 30 23:41:20.454: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Performing setup for networking test in namespace pod-network-test-8831 STEP: creating a selector STEP: Creating the service pods in kubernetes Jun 30 23:41:20.570: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable Jun 30 23:41:20.669: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Jun 30 23:41:22.824: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Jun 30 23:41:24.723: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Jun 30 23:41:26.673: INFO: The status of Pod netserver-0 is Running (Ready = false) Jun 30 23:41:28.673: INFO: The status of Pod netserver-0 is Running (Ready = false) Jun 30 23:41:30.674: INFO: The status of Pod netserver-0 is Running (Ready = false) Jun 30 23:41:32.674: INFO: The status of Pod netserver-0 is Running (Ready = false) Jun 30 23:41:34.673: INFO: The status of Pod netserver-0 is Running (Ready = false) Jun 30 23:41:36.675: INFO: The status of Pod netserver-0 is Running (Ready = false) Jun 30 23:41:38.674: INFO: The status of Pod netserver-0 is Running (Ready = false) Jun 30 23:41:40.674: INFO: The status of Pod netserver-0 is Running (Ready = true) Jun 30 23:41:40.681: INFO: The status of Pod netserver-1 is Running (Ready = true) STEP: Creating test pods Jun 30 23:41:46.744: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.244.1.42 8081 | grep -v '^\s*$'] Namespace:pod-network-test-8831 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jun 30 23:41:46.744: INFO: >>> kubeConfig: /root/.kube/config I0630 23:41:46.781702 8 log.go:172] (0xc003546370) (0xc002767ae0) Create stream I0630 23:41:46.781735 8 log.go:172] (0xc003546370) (0xc002767ae0) Stream added, broadcasting: 1 I0630 23:41:46.784241 8 log.go:172] (0xc003546370) Reply frame received for 1 I0630 23:41:46.784277 8 log.go:172] (0xc003546370) (0xc002767b80) Create stream I0630 23:41:46.784284 8 log.go:172] (0xc003546370) (0xc002767b80) Stream added, broadcasting: 3 I0630 23:41:46.785405 8 log.go:172] (0xc003546370) Reply frame received for 3 I0630 23:41:46.785462 8 log.go:172] (0xc003546370) (0xc0026dc6e0) Create stream I0630 23:41:46.785476 8 log.go:172] (0xc003546370) (0xc0026dc6e0) Stream added, broadcasting: 5 I0630 23:41:46.786451 8 log.go:172] (0xc003546370) Reply frame received for 5 I0630 23:41:47.886636 8 log.go:172] (0xc003546370) Data 
frame received for 3 I0630 23:41:47.886750 8 log.go:172] (0xc002767b80) (3) Data frame handling I0630 23:41:47.886804 8 log.go:172] (0xc002767b80) (3) Data frame sent I0630 23:41:47.887131 8 log.go:172] (0xc003546370) Data frame received for 3 I0630 23:41:47.887184 8 log.go:172] (0xc002767b80) (3) Data frame handling I0630 23:41:47.887216 8 log.go:172] (0xc003546370) Data frame received for 5 I0630 23:41:47.887231 8 log.go:172] (0xc0026dc6e0) (5) Data frame handling I0630 23:41:47.890055 8 log.go:172] (0xc003546370) Data frame received for 1 I0630 23:41:47.890098 8 log.go:172] (0xc002767ae0) (1) Data frame handling I0630 23:41:47.890135 8 log.go:172] (0xc002767ae0) (1) Data frame sent I0630 23:41:47.890178 8 log.go:172] (0xc003546370) (0xc002767ae0) Stream removed, broadcasting: 1 I0630 23:41:47.890195 8 log.go:172] (0xc003546370) Go away received I0630 23:41:47.890658 8 log.go:172] (0xc003546370) (0xc002767ae0) Stream removed, broadcasting: 1 I0630 23:41:47.890698 8 log.go:172] (0xc003546370) (0xc002767b80) Stream removed, broadcasting: 3 I0630 23:41:47.890731 8 log.go:172] (0xc003546370) (0xc0026dc6e0) Stream removed, broadcasting: 5 Jun 30 23:41:47.890: INFO: Found all expected endpoints: [netserver-0] Jun 30 23:41:47.894: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.244.2.91 8081 | grep -v '^\s*$'] Namespace:pod-network-test-8831 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jun 30 23:41:47.894: INFO: >>> kubeConfig: /root/.kube/config I0630 23:41:47.929313 8 log.go:172] (0xc00182a580) (0xc002446a00) Create stream I0630 23:41:47.929346 8 log.go:172] (0xc00182a580) (0xc002446a00) Stream added, broadcasting: 1 I0630 23:41:47.931980 8 log.go:172] (0xc00182a580) Reply frame received for 1 I0630 23:41:47.932029 8 log.go:172] (0xc00182a580) (0xc0026dc780) Create stream I0630 23:41:47.932057 8 log.go:172] (0xc00182a580) (0xc0026dc780) Stream added, broadcasting: 3 I0630 23:41:47.933636 8 log.go:172] (0xc00182a580) Reply frame received for 3 I0630 23:41:47.933661 8 log.go:172] (0xc00182a580) (0xc0026dc820) Create stream I0630 23:41:47.933668 8 log.go:172] (0xc00182a580) (0xc0026dc820) Stream added, broadcasting: 5 I0630 23:41:47.934671 8 log.go:172] (0xc00182a580) Reply frame received for 5 I0630 23:41:49.003078 8 log.go:172] (0xc00182a580) Data frame received for 5 I0630 23:41:49.003139 8 log.go:172] (0xc0026dc820) (5) Data frame handling I0630 23:41:49.003168 8 log.go:172] (0xc00182a580) Data frame received for 3 I0630 23:41:49.003183 8 log.go:172] (0xc0026dc780) (3) Data frame handling I0630 23:41:49.003198 8 log.go:172] (0xc0026dc780) (3) Data frame sent I0630 23:41:49.003212 8 log.go:172] (0xc00182a580) Data frame received for 3 I0630 23:41:49.003224 8 log.go:172] (0xc0026dc780) (3) Data frame handling I0630 23:41:49.006063 8 log.go:172] (0xc00182a580) Data frame received for 1 I0630 23:41:49.006099 8 log.go:172] (0xc002446a00) (1) Data frame handling I0630 23:41:49.006128 8 log.go:172] (0xc002446a00) (1) Data frame sent I0630 23:41:49.006160 8 log.go:172] (0xc00182a580) (0xc002446a00) Stream removed, broadcasting: 1 I0630 23:41:49.006209 8 log.go:172] (0xc00182a580) Go away received I0630 23:41:49.006320 8 log.go:172] (0xc00182a580) (0xc002446a00) Stream removed, broadcasting: 1 I0630 23:41:49.006358 8 log.go:172] (0xc00182a580) (0xc0026dc780) Stream removed, broadcasting: 3 I0630 23:41:49.006372 8 log.go:172] (0xc00182a580) (0xc0026dc820) Stream removed, broadcasting: 5 Jun 30 
23:41:49.006: INFO: Found all expected endpoints: [netserver-1] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 30 23:41:49.006: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-8831" for this suite. • [SLOW TEST:28.561 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29 should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]","total":294,"completed":13,"skipped":160,"failed":0} SSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl patch should add annotations for pods in rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 30 23:41:49.015: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:251 [It] should add annotations for pods in rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating Agnhost RC Jun 30 23:41:49.084: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-781' Jun 30 23:41:50.538: INFO: stderr: "" Jun 30 23:41:50.538: INFO: stdout: "replicationcontroller/agnhost-master created\n" STEP: Waiting for Agnhost master to start. Jun 30 23:41:51.542: INFO: Selector matched 1 pods for map[app:agnhost] Jun 30 23:41:51.542: INFO: Found 0 / 1 Jun 30 23:41:52.542: INFO: Selector matched 1 pods for map[app:agnhost] Jun 30 23:41:52.542: INFO: Found 0 / 1 Jun 30 23:41:53.542: INFO: Selector matched 1 pods for map[app:agnhost] Jun 30 23:41:53.542: INFO: Found 0 / 1 Jun 30 23:41:54.646: INFO: Selector matched 1 pods for map[app:agnhost] Jun 30 23:41:54.646: INFO: Found 1 / 1 Jun 30 23:41:54.646: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 STEP: patching all pods Jun 30 23:41:54.837: INFO: Selector matched 1 pods for map[app:agnhost] Jun 30 23:41:54.837: INFO: ForEach: Found 1 pods from the filter. Now looping through them. Jun 30 23:41:54.837: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config patch pod agnhost-master-dpxzp --namespace=kubectl-781 -p {"metadata":{"annotations":{"x":"y"}}}' Jun 30 23:41:54.978: INFO: stderr: "" Jun 30 23:41:54.978: INFO: stdout: "pod/agnhost-master-dpxzp patched\n" STEP: checking annotations Jun 30 23:41:55.034: INFO: Selector matched 1 pods for map[app:agnhost] Jun 30 23:41:55.034: INFO: ForEach: Found 1 pods from the filter. Now looping through them. 
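The kubectl patch step above is equivalent to a strategic-merge patch through the API. A client-go sketch using the same payload the test hands to `kubectl patch`:

    package main

    import (
    	"context"

    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/apimachinery/pkg/types"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
    	if err != nil {
    		panic(err)
    	}
    	cs := kubernetes.NewForConfigOrDie(cfg)

    	// Strategic merge keeps existing annotations and adds x=y on top.
    	patch := []byte(`{"metadata":{"annotations":{"x":"y"}}}`)
    	if _, err := cs.CoreV1().Pods("kubectl-781").Patch(context.Background(),
    		"agnhost-master-dpxzp", types.StrategicMergePatchType, patch, metav1.PatchOptions{}); err != nil {
    		panic(err)
    	}
    }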
[AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 30 23:41:55.035: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-781" for this suite. • [SLOW TEST:6.028 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl patch /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1473 should add annotations for pods in rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl patch should add annotations for pods in rc [Conformance]","total":294,"completed":14,"skipped":167,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 30 23:41:55.043: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD preserving unknown fields in an embedded object [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Jun 30 23:41:55.416: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with any unknown properties Jun 30 23:41:58.431: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1698 create -f -' Jun 30 23:42:02.577: INFO: stderr: "" Jun 30 23:42:02.577: INFO: stdout: "e2e-test-crd-publish-openapi-3178-crd.crd-publish-openapi-test-unknown-in-nested.example.com/test-cr created\n" Jun 30 23:42:02.577: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1698 delete e2e-test-crd-publish-openapi-3178-crds test-cr' Jun 30 23:42:02.701: INFO: stderr: "" Jun 30 23:42:02.701: INFO: stdout: "e2e-test-crd-publish-openapi-3178-crd.crd-publish-openapi-test-unknown-in-nested.example.com \"test-cr\" deleted\n" Jun 30 23:42:02.702: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1698 apply -f -' Jun 30 23:42:02.968: INFO: stderr: "" Jun 30 23:42:02.969: INFO: stdout: "e2e-test-crd-publish-openapi-3178-crd.crd-publish-openapi-test-unknown-in-nested.example.com/test-cr created\n" Jun 30 23:42:02.969: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1698 delete e2e-test-crd-publish-openapi-3178-crds test-cr' Jun 30 23:42:03.086: INFO: stderr: "" Jun 30 23:42:03.086: INFO: stdout: "e2e-test-crd-publish-openapi-3178-crd.crd-publish-openapi-test-unknown-in-nested.example.com \"test-cr\" deleted\n" STEP: kubectl explain works to explain CR Jun 30 23:42:03.086: INFO: Running 
'/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-3178-crds' Jun 30 23:42:04.309: INFO: stderr: "" Jun 30 23:42:04.309: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-3178-crd\nVERSION: crd-publish-openapi-test-unknown-in-nested.example.com/v1\n\nDESCRIPTION:\n preserve-unknown-properties in nested field for Testing\n\nFIELDS:\n apiVersion\t<string>\n APIVersion defines the versioned schema of this representation of an\n object. Servers should convert recognized schemas to the latest internal\n value, and may reject unrecognized values. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n\n kind\t<string>\n Kind is a string value representing the REST resource this object\n represents. Servers may infer this from the endpoint the client submits\n requests to. Cannot be updated. In CamelCase. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n\n metadata\t<Object>\n Standard object's metadata. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n spec\t\n Specification of Waldo\n\n status\t\n Status of Waldo\n\n" [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 30 23:42:07.218: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-1698" for this suite. • [SLOW TEST:12.180 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD preserving unknown fields in an embedded object [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance]","total":294,"completed":15,"skipped":184,"failed":0} SSS ------------------------------ [sig-api-machinery] Servers with support for Table transformation should return a 406 for a backend which does not implement metadata [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Servers with support for Table transformation /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 30 23:42:07.223: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename tables STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] Servers with support for Table transformation /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/table_conversion.go:47 [It] should return a 406 for a backend which does not implement metadata [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [AfterEach] [sig-api-machinery] Servers with support for Table transformation /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 30 23:42:07.281: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying
namespace "tables-6915" for this suite. •{"msg":"PASSED [sig-api-machinery] Servers with support for Table transformation should return a 406 for a backend which does not implement metadata [Conformance]","total":294,"completed":16,"skipped":187,"failed":0} SSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 30 23:42:07.307: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Jun 30 23:42:07.897: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Jun 30 23:42:09.907: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729157327, loc:(*time.Location)(0x80643c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729157327, loc:(*time.Location)(0x80643c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729157327, loc:(*time.Location)(0x80643c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729157327, loc:(*time.Location)(0x80643c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Jun 30 23:42:12.954: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate custom resource with different stored version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Jun 30 23:42:12.958: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the mutating webhook for custom resource e2e-test-webhook-5775-crds.webhook.example.com via the AdmissionRegistration API STEP: Creating a custom resource while v1 is storage version STEP: Patching Custom Resource Definition to set v2 as storage STEP: Patching the custom resource while v2 is storage version [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 30 23:42:14.217: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-8360" for this suite. STEP: Destroying namespace "webhook-8360-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:7.024 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate custom resource with different stored version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","total":294,"completed":17,"skipped":197,"failed":0} S ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate configmap [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 30 23:42:14.331: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Jun 30 23:42:15.129: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Jun 30 23:42:17.152: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729157335, loc:(*time.Location)(0x80643c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729157335, loc:(*time.Location)(0x80643c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729157335, loc:(*time.Location)(0x80643c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729157335, loc:(*time.Location)(0x80643c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)} Jun 30 23:42:19.156: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729157335, loc:(*time.Location)(0x80643c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729157335, loc:(*time.Location)(0x80643c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729157335, 
loc:(*time.Location)(0x80643c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729157335, loc:(*time.Location)(0x80643c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Jun 30 23:42:22.179: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate configmap [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Registering the mutating configmap webhook via the AdmissionRegistration API STEP: create a configmap that should be updated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 30 23:42:22.252: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-6688" for this suite. STEP: Destroying namespace "webhook-6688-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:8.120 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate configmap [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate configmap [Conformance]","total":294,"completed":18,"skipped":198,"failed":0} S ------------------------------ [sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 30 23:42:22.452: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating secret with name s-test-opt-del-248d8162-9fb0-4a36-a97c-43e13dc656cc STEP: Creating secret with name s-test-opt-upd-20c33e07-8791-4221-af38-fb95831c7bca STEP: Creating the pod STEP: Deleting secret s-test-opt-del-248d8162-9fb0-4a36-a97c-43e13dc656cc STEP: Updating secret s-test-opt-upd-20c33e07-8791-4221-af38-fb95831c7bca STEP: Creating secret with name s-test-opt-create-40caf6a0-c717-4e7c-bc14-5499383e3703 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 30 23:42:30.712: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-4855" for this suite. 
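The "optional" in "optional updates should be reflected in volume" is a flag on the secret volume source; it is what lets the pod above keep running while s-test-opt-del-... is deleted and s-test-opt-create-... does not exist yet. A minimal sketch:

    package main

    import (
    	"fmt"

    	corev1 "k8s.io/api/core/v1"
    )

    // optionalSecretVolume marks the secret reference optional, so a missing
    // secret produces an empty directory instead of a mount failure.
    func optionalSecretVolume(name string) corev1.VolumeSource {
    	optional := true
    	return corev1.VolumeSource{
    		Secret: &corev1.SecretVolumeSource{
    			SecretName: name,
    			Optional:   &optional,
    		},
    	}
    }

    func main() {
    	v := optionalSecretVolume("s-test-opt-create-40caf6a0-c717-4e7c-bc14-5499383e3703")
    	fmt.Println(v.Secret.SecretName)
    }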
• [SLOW TEST:8.268 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:36 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance]","total":294,"completed":19,"skipped":199,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 30 23:42:30.720: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating secret with name s-test-opt-del-e80a6775-6060-4777-828d-123ae3b83c80 STEP: Creating secret with name s-test-opt-upd-2aea3322-e536-4a5d-a6f6-07731faa89cf STEP: Creating the pod STEP: Deleting secret s-test-opt-del-e80a6775-6060-4777-828d-123ae3b83c80 STEP: Updating secret s-test-opt-upd-2aea3322-e536-4a5d-a6f6-07731faa89cf STEP: Creating secret with name s-test-opt-create-7e813582-d0ea-43d6-ad6a-943d1faf0add STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 30 23:42:39.038: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3555" for this suite. 
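The projected variant below exercises the same optional-secret idea, with the secret as one source inside a projected volume. A sketch of that volume source:

    package main

    import (
    	"fmt"

    	corev1 "k8s.io/api/core/v1"
    )

    // projectedSecretVolume wraps an optional secret in a projected volume;
    // several such sources can share a single mount point.
    func projectedSecretVolume(name string) corev1.VolumeSource {
    	optional := true
    	return corev1.VolumeSource{
    		Projected: &corev1.ProjectedVolumeSource{
    			Sources: []corev1.VolumeProjection{{
    				Secret: &corev1.SecretProjection{
    					LocalObjectReference: corev1.LocalObjectReference{Name: name},
    					Optional:             &optional,
    				},
    			}},
    		},
    	}
    }

    func main() {
    	v := projectedSecretVolume("s-test-opt-upd-2aea3322-e536-4a5d-a6f6-07731faa89cf")
    	fmt.Println(v.Projected.Sources[0].Secret.Name)
    }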
• [SLOW TEST:8.326 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:35 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance]","total":294,"completed":20,"skipped":215,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 30 23:42:39.047: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir 0666 on node default medium Jun 30 23:42:39.201: INFO: Waiting up to 5m0s for pod "pod-688fd599-4558-4d55-b7a9-ace870845d61" in namespace "emptydir-5629" to be "Succeeded or Failed" Jun 30 23:42:39.244: INFO: Pod "pod-688fd599-4558-4d55-b7a9-ace870845d61": Phase="Pending", Reason="", readiness=false. Elapsed: 43.367501ms Jun 30 23:42:41.323: INFO: Pod "pod-688fd599-4558-4d55-b7a9-ace870845d61": Phase="Pending", Reason="", readiness=false. Elapsed: 2.1221641s Jun 30 23:42:43.327: INFO: Pod "pod-688fd599-4558-4d55-b7a9-ace870845d61": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.125738406s STEP: Saw pod success Jun 30 23:42:43.327: INFO: Pod "pod-688fd599-4558-4d55-b7a9-ace870845d61" satisfied condition "Succeeded or Failed" Jun 30 23:42:43.331: INFO: Trying to get logs from node latest-worker2 pod pod-688fd599-4558-4d55-b7a9-ace870845d61 container test-container: STEP: delete the pod Jun 30 23:42:43.396: INFO: Waiting for pod pod-688fd599-4558-4d55-b7a9-ace870845d61 to disappear Jun 30 23:42:43.423: INFO: Pod pod-688fd599-4558-4d55-b7a9-ace870845d61 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 30 23:42:43.423: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-5629" for this suite. 
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":294,"completed":21,"skipped":228,"failed":0} SSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 30 23:42:43.431: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with downward pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating pod pod-subpath-test-downwardapi-bkkd STEP: Creating a pod to test atomic-volume-subpath Jun 30 23:42:43.616: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-bkkd" in namespace "subpath-4199" to be "Succeeded or Failed" Jun 30 23:42:43.620: INFO: Pod "pod-subpath-test-downwardapi-bkkd": Phase="Pending", Reason="", readiness=false. Elapsed: 3.70537ms Jun 30 23:42:45.634: INFO: Pod "pod-subpath-test-downwardapi-bkkd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017583386s Jun 30 23:42:47.639: INFO: Pod "pod-subpath-test-downwardapi-bkkd": Phase="Pending", Reason="", readiness=false. Elapsed: 4.022853353s Jun 30 23:42:49.644: INFO: Pod "pod-subpath-test-downwardapi-bkkd": Phase="Running", Reason="", readiness=true. Elapsed: 6.027364053s Jun 30 23:42:51.649: INFO: Pod "pod-subpath-test-downwardapi-bkkd": Phase="Running", Reason="", readiness=true. Elapsed: 8.03203753s Jun 30 23:42:53.653: INFO: Pod "pod-subpath-test-downwardapi-bkkd": Phase="Running", Reason="", readiness=true. Elapsed: 10.036509087s Jun 30 23:42:55.657: INFO: Pod "pod-subpath-test-downwardapi-bkkd": Phase="Running", Reason="", readiness=true. Elapsed: 12.040953726s Jun 30 23:42:57.662: INFO: Pod "pod-subpath-test-downwardapi-bkkd": Phase="Running", Reason="", readiness=true. Elapsed: 14.045083698s Jun 30 23:42:59.666: INFO: Pod "pod-subpath-test-downwardapi-bkkd": Phase="Running", Reason="", readiness=true. Elapsed: 16.049742068s Jun 30 23:43:01.671: INFO: Pod "pod-subpath-test-downwardapi-bkkd": Phase="Running", Reason="", readiness=true. Elapsed: 18.05460481s Jun 30 23:43:03.676: INFO: Pod "pod-subpath-test-downwardapi-bkkd": Phase="Running", Reason="", readiness=true. Elapsed: 20.059235764s Jun 30 23:43:05.680: INFO: Pod "pod-subpath-test-downwardapi-bkkd": Phase="Running", Reason="", readiness=true. Elapsed: 22.063857461s Jun 30 23:43:07.685: INFO: Pod "pod-subpath-test-downwardapi-bkkd": Phase="Running", Reason="", readiness=true. Elapsed: 24.06848925s Jun 30 23:43:09.690: INFO: Pod "pod-subpath-test-downwardapi-bkkd": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 26.073054147s STEP: Saw pod success Jun 30 23:43:09.690: INFO: Pod "pod-subpath-test-downwardapi-bkkd" satisfied condition "Succeeded or Failed" Jun 30 23:43:09.693: INFO: Trying to get logs from node latest-worker2 pod pod-subpath-test-downwardapi-bkkd container test-container-subpath-downwardapi-bkkd: STEP: delete the pod Jun 30 23:43:09.736: INFO: Waiting for pod pod-subpath-test-downwardapi-bkkd to disappear Jun 30 23:43:09.747: INFO: Pod pod-subpath-test-downwardapi-bkkd no longer exists STEP: Deleting pod pod-subpath-test-downwardapi-bkkd Jun 30 23:43:09.747: INFO: Deleting pod "pod-subpath-test-downwardapi-bkkd" in namespace "subpath-4199" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 30 23:43:09.749: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-4199" for this suite. • [SLOW TEST:26.326 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with downward pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [LinuxOnly] [Conformance]","total":294,"completed":22,"skipped":239,"failed":0} SSS ------------------------------ [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 30 23:43:09.758: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin Jun 30 23:43:09.846: INFO: Waiting up to 5m0s for pod "downwardapi-volume-c7216d1f-3c39-4ab4-870b-dc7bdeaa60f3" in namespace "downward-api-413" to be "Succeeded or Failed" Jun 30 23:43:09.862: INFO: Pod "downwardapi-volume-c7216d1f-3c39-4ab4-870b-dc7bdeaa60f3": Phase="Pending", Reason="", readiness=false. Elapsed: 16.046693ms Jun 30 23:43:11.866: INFO: Pod "downwardapi-volume-c7216d1f-3c39-4ab4-870b-dc7bdeaa60f3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02026957s Jun 30 23:43:13.871: INFO: Pod "downwardapi-volume-c7216d1f-3c39-4ab4-870b-dc7bdeaa60f3": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.024985619s STEP: Saw pod success Jun 30 23:43:13.871: INFO: Pod "downwardapi-volume-c7216d1f-3c39-4ab4-870b-dc7bdeaa60f3" satisfied condition "Succeeded or Failed" Jun 30 23:43:13.874: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-c7216d1f-3c39-4ab4-870b-dc7bdeaa60f3 container client-container: STEP: delete the pod Jun 30 23:43:13.906: INFO: Waiting for pod downwardapi-volume-c7216d1f-3c39-4ab4-870b-dc7bdeaa60f3 to disappear Jun 30 23:43:13.910: INFO: Pod downwardapi-volume-c7216d1f-3c39-4ab4-870b-dc7bdeaa60f3 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 30 23:43:13.910: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-413" for this suite. •{"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":294,"completed":23,"skipped":242,"failed":0} ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 30 23:43:13.918: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating projection with secret that has name projected-secret-test-eb04a92d-bead-4ba5-9c29-51fd18a1a9cf STEP: Creating a pod to test consume secrets Jun 30 23:43:14.039: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-74c6af01-77cb-4f68-9f36-d7e45aa2e29c" in namespace "projected-976" to be "Succeeded or Failed" Jun 30 23:43:14.048: INFO: Pod "pod-projected-secrets-74c6af01-77cb-4f68-9f36-d7e45aa2e29c": Phase="Pending", Reason="", readiness=false. Elapsed: 8.551916ms Jun 30 23:43:16.053: INFO: Pod "pod-projected-secrets-74c6af01-77cb-4f68-9f36-d7e45aa2e29c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013897563s Jun 30 23:43:18.057: INFO: Pod "pod-projected-secrets-74c6af01-77cb-4f68-9f36-d7e45aa2e29c": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.018074517s STEP: Saw pod success Jun 30 23:43:18.057: INFO: Pod "pod-projected-secrets-74c6af01-77cb-4f68-9f36-d7e45aa2e29c" satisfied condition "Succeeded or Failed" Jun 30 23:43:18.060: INFO: Trying to get logs from node latest-worker2 pod pod-projected-secrets-74c6af01-77cb-4f68-9f36-d7e45aa2e29c container projected-secret-volume-test: STEP: delete the pod Jun 30 23:43:18.116: INFO: Waiting for pod pod-projected-secrets-74c6af01-77cb-4f68-9f36-d7e45aa2e29c to disappear Jun 30 23:43:18.276: INFO: Pod pod-projected-secrets-74c6af01-77cb-4f68-9f36-d7e45aa2e29c no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 30 23:43:18.276: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-976" for this suite. •{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":294,"completed":24,"skipped":242,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition listing custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 30 23:43:18.301: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] listing custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Jun 30 23:43:18.465: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 30 23:43:24.778: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-2109" for this suite. 
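Listing CustomResourceDefinition objects, as the test above does, goes through the apiextensions clientset rather than the core one, since CRDs live in the apiextensions.k8s.io group. A minimal sketch:

    package main

    import (
    	"context"
    	"fmt"

    	apiextensionsclient "k8s.io/apiextensions-apiserver/pkg/client/clientset/clientset"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
    	if err != nil {
    		panic(err)
    	}
    	cs := apiextensionsclient.NewForConfigOrDie(cfg)
    	crds, err := cs.ApiextensionsV1().CustomResourceDefinitions().List(context.Background(), metav1.ListOptions{})
    	if err != nil {
    		panic(err)
    	}
    	for _, crd := range crds.Items {
    		fmt.Println(crd.Name)
    	}
    }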
• [SLOW TEST:6.487 seconds] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 Simple CustomResourceDefinition /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/custom_resource_definition.go:48 listing custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition listing custom resource definition objects works [Conformance]","total":294,"completed":25,"skipped":268,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-network] DNS should support configurable pod DNS nameservers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 30 23:43:24.788: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should support configurable pod DNS nameservers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod with dnsPolicy=None and customized dnsConfig... Jun 30 23:43:24.882: INFO: Created pod &Pod{ObjectMeta:{dns-3620 dns-3620 /api/v1/namespaces/dns-3620/pods/dns-3620 9d6396fe-b7cd-4436-8aa4-9a5518d75eb3 17233280 0 2020-06-30 23:43:24 +0000 UTC map[] map[] [] [] [{e2e.test Update v1 2020-06-30 23:43:24 +0000 UTC FieldsV1 
{"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:args":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsConfig":{".":{},"f:nameservers":{},"f:searches":{}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-6hv7g,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-6hv7g,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13,Command:[],Args:[pause],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-6hv7g,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:None,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:&PodDNSConfig{Nameservers:[1.1.1.1],Searches:[resolv.conf.local],Options:[]PodDNSConfigOption{},},ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jun 30 23:43:24.899: INFO: The status of Pod dns-3620 is Pending, waiting for it to be Running (with Ready = true) Jun 30 23:43:27.007: INFO: The status of Pod dns-3620 is Pending, waiting for it to be Running (with Ready = true) Jun 30 
23:43:28.910: INFO: The status of Pod dns-3620 is Running (Ready = true) STEP: Verifying customized DNS suffix list is configured on pod... Jun 30 23:43:28.910: INFO: ExecWithOptions {Command:[/agnhost dns-suffix] Namespace:dns-3620 PodName:dns-3620 ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jun 30 23:43:28.910: INFO: >>> kubeConfig: /root/.kube/config I0630 23:43:28.947749 8 log.go:172] (0xc006177ad0) (0xc001d4d040) Create stream I0630 23:43:28.947774 8 log.go:172] (0xc006177ad0) (0xc001d4d040) Stream added, broadcasting: 1 I0630 23:43:28.950134 8 log.go:172] (0xc006177ad0) Reply frame received for 1 I0630 23:43:28.950199 8 log.go:172] (0xc006177ad0) (0xc0021d2fa0) Create stream I0630 23:43:28.950217 8 log.go:172] (0xc006177ad0) (0xc0021d2fa0) Stream added, broadcasting: 3 I0630 23:43:28.951033 8 log.go:172] (0xc006177ad0) Reply frame received for 3 I0630 23:43:28.951054 8 log.go:172] (0xc006177ad0) (0xc0021d3040) Create stream I0630 23:43:28.951061 8 log.go:172] (0xc006177ad0) (0xc0021d3040) Stream added, broadcasting: 5 I0630 23:43:28.951882 8 log.go:172] (0xc006177ad0) Reply frame received for 5 I0630 23:43:29.067023 8 log.go:172] (0xc006177ad0) Data frame received for 3 I0630 23:43:29.067056 8 log.go:172] (0xc0021d2fa0) (3) Data frame handling I0630 23:43:29.067082 8 log.go:172] (0xc0021d2fa0) (3) Data frame sent I0630 23:43:29.068480 8 log.go:172] (0xc006177ad0) Data frame received for 3 I0630 23:43:29.068506 8 log.go:172] (0xc0021d2fa0) (3) Data frame handling I0630 23:43:29.068604 8 log.go:172] (0xc006177ad0) Data frame received for 5 I0630 23:43:29.068641 8 log.go:172] (0xc0021d3040) (5) Data frame handling I0630 23:43:29.070632 8 log.go:172] (0xc006177ad0) Data frame received for 1 I0630 23:43:29.070664 8 log.go:172] (0xc001d4d040) (1) Data frame handling I0630 23:43:29.070694 8 log.go:172] (0xc001d4d040) (1) Data frame sent I0630 23:43:29.070730 8 log.go:172] (0xc006177ad0) (0xc001d4d040) Stream removed, broadcasting: 1 I0630 23:43:29.070751 8 log.go:172] (0xc006177ad0) Go away received I0630 23:43:29.070855 8 log.go:172] (0xc006177ad0) (0xc001d4d040) Stream removed, broadcasting: 1 I0630 23:43:29.070894 8 log.go:172] (0xc006177ad0) (0xc0021d2fa0) Stream removed, broadcasting: 3 I0630 23:43:29.070907 8 log.go:172] (0xc006177ad0) (0xc0021d3040) Stream removed, broadcasting: 5 STEP: Verifying customized DNS server is configured on pod... 
Jun 30 23:43:29.070: INFO: ExecWithOptions {Command:[/agnhost dns-server-list] Namespace:dns-3620 PodName:dns-3620 ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jun 30 23:43:29.070: INFO: >>> kubeConfig: /root/.kube/config I0630 23:43:29.107137 8 log.go:172] (0xc0065fe210) (0xc0020f59a0) Create stream I0630 23:43:29.107176 8 log.go:172] (0xc0065fe210) (0xc0020f59a0) Stream added, broadcasting: 1 I0630 23:43:29.109743 8 log.go:172] (0xc0065fe210) Reply frame received for 1 I0630 23:43:29.109779 8 log.go:172] (0xc0065fe210) (0xc0021d30e0) Create stream I0630 23:43:29.109787 8 log.go:172] (0xc0065fe210) (0xc0021d30e0) Stream added, broadcasting: 3 I0630 23:43:29.110685 8 log.go:172] (0xc0065fe210) Reply frame received for 3 I0630 23:43:29.110726 8 log.go:172] (0xc0065fe210) (0xc0021d3180) Create stream I0630 23:43:29.110901 8 log.go:172] (0xc0065fe210) (0xc0021d3180) Stream added, broadcasting: 5 I0630 23:43:29.111948 8 log.go:172] (0xc0065fe210) Reply frame received for 5 I0630 23:43:29.210321 8 log.go:172] (0xc0065fe210) Data frame received for 3 I0630 23:43:29.210354 8 log.go:172] (0xc0021d30e0) (3) Data frame handling I0630 23:43:29.210376 8 log.go:172] (0xc0021d30e0) (3) Data frame sent I0630 23:43:29.212049 8 log.go:172] (0xc0065fe210) Data frame received for 3 I0630 23:43:29.212096 8 log.go:172] (0xc0021d30e0) (3) Data frame handling I0630 23:43:29.212425 8 log.go:172] (0xc0065fe210) Data frame received for 5 I0630 23:43:29.212454 8 log.go:172] (0xc0021d3180) (5) Data frame handling I0630 23:43:29.214821 8 log.go:172] (0xc0065fe210) Data frame received for 1 I0630 23:43:29.214890 8 log.go:172] (0xc0020f59a0) (1) Data frame handling I0630 23:43:29.214961 8 log.go:172] (0xc0020f59a0) (1) Data frame sent I0630 23:43:29.215073 8 log.go:172] (0xc0065fe210) (0xc0020f59a0) Stream removed, broadcasting: 1 I0630 23:43:29.215143 8 log.go:172] (0xc0065fe210) Go away received I0630 23:43:29.215274 8 log.go:172] (0xc0065fe210) (0xc0020f59a0) Stream removed, broadcasting: 1 I0630 23:43:29.215303 8 log.go:172] (0xc0065fe210) (0xc0021d30e0) Stream removed, broadcasting: 3 I0630 23:43:29.215329 8 log.go:172] (0xc0065fe210) (0xc0021d3180) Stream removed, broadcasting: 5 Jun 30 23:43:29.215: INFO: Deleting pod dns-3620... [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 30 23:43:29.252: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-3620" for this suite. 
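The long pod dump above boils down to two fields: dnsPolicy None plus a custom dnsConfig, which is what /etc/resolv.conf in the container is then generated from. A minimal sketch rebuilding that spec with the k8s.io/api types and printing it as JSON (the pod name is a placeholder; nameserver, search domain, and image are taken from the dump):

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "dns-demo"},
		Spec: corev1.PodSpec{
			// DNSNone disables cluster DNS entirely; resolv.conf is built
			// solely from the DNSConfig block below.
			DNSPolicy: corev1.DNSNone,
			DNSConfig: &corev1.PodDNSConfig{
				Nameservers: []string{"1.1.1.1"},
				Searches:    []string{"resolv.conf.local"},
			},
			Containers: []corev1.Container{{
				Name:  "agnhost",
				Image: "us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13",
				Args:  []string{"pause"},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out)) // pipe to kubectl apply -f - to reproduce
}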
•{"msg":"PASSED [sig-network] DNS should support configurable pod DNS nameservers [Conformance]","total":294,"completed":26,"skipped":281,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected combined /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 30 23:43:29.288: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name configmap-projected-all-test-volume-85cf9561-ccba-481e-a677-3e760a0799af STEP: Creating secret with name secret-projected-all-test-volume-fb2363fe-ad4a-41ed-b0e4-a3f1bb84d0ca STEP: Creating a pod to test Check all projections for projected volume plugin Jun 30 23:43:29.661: INFO: Waiting up to 5m0s for pod "projected-volume-5a62ba25-cca3-4885-8869-bd3f691745dd" in namespace "projected-7347" to be "Succeeded or Failed" Jun 30 23:43:29.682: INFO: Pod "projected-volume-5a62ba25-cca3-4885-8869-bd3f691745dd": Phase="Pending", Reason="", readiness=false. Elapsed: 21.248152ms Jun 30 23:43:31.687: INFO: Pod "projected-volume-5a62ba25-cca3-4885-8869-bd3f691745dd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.025739641s Jun 30 23:43:33.692: INFO: Pod "projected-volume-5a62ba25-cca3-4885-8869-bd3f691745dd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.03048528s STEP: Saw pod success Jun 30 23:43:33.692: INFO: Pod "projected-volume-5a62ba25-cca3-4885-8869-bd3f691745dd" satisfied condition "Succeeded or Failed" Jun 30 23:43:33.694: INFO: Trying to get logs from node latest-worker pod projected-volume-5a62ba25-cca3-4885-8869-bd3f691745dd container projected-all-volume-test: STEP: delete the pod Jun 30 23:43:33.756: INFO: Waiting for pod projected-volume-5a62ba25-cca3-4885-8869-bd3f691745dd to disappear Jun 30 23:43:33.766: INFO: Pod projected-volume-5a62ba25-cca3-4885-8869-bd3f691745dd no longer exists [AfterEach] [sig-storage] Projected combined /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 30 23:43:33.766: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-7347" for this suite. 
•{"msg":"PASSED [sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance]","total":294,"completed":27,"skipped":308,"failed":0} SSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields at the schema root [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 30 23:43:33.793: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD preserving unknown fields at the schema root [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Jun 30 23:43:33.832: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with any unknown properties Jun 30 23:43:37.236: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-8039 create -f -' Jun 30 23:43:43.186: INFO: stderr: "" Jun 30 23:43:43.186: INFO: stdout: "e2e-test-crd-publish-openapi-4395-crd.crd-publish-openapi-test-unknown-at-root.example.com/test-cr created\n" Jun 30 23:43:43.187: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-8039 delete e2e-test-crd-publish-openapi-4395-crds test-cr' Jun 30 23:43:43.308: INFO: stderr: "" Jun 30 23:43:43.308: INFO: stdout: "e2e-test-crd-publish-openapi-4395-crd.crd-publish-openapi-test-unknown-at-root.example.com \"test-cr\" deleted\n" Jun 30 23:43:43.308: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-8039 apply -f -' Jun 30 23:43:46.483: INFO: stderr: "" Jun 30 23:43:46.483: INFO: stdout: "e2e-test-crd-publish-openapi-4395-crd.crd-publish-openapi-test-unknown-at-root.example.com/test-cr created\n" Jun 30 23:43:46.483: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-8039 delete e2e-test-crd-publish-openapi-4395-crds test-cr' Jun 30 23:43:46.619: INFO: stderr: "" Jun 30 23:43:46.619: INFO: stdout: "e2e-test-crd-publish-openapi-4395-crd.crd-publish-openapi-test-unknown-at-root.example.com \"test-cr\" deleted\n" STEP: kubectl explain works to explain CR Jun 30 23:43:46.619: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-4395-crds' Jun 30 23:43:47.452: INFO: stderr: "" Jun 30 23:43:47.452: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-4395-crd\nVERSION: crd-publish-openapi-test-unknown-at-root.example.com/v1\n\nDESCRIPTION:\n \n" [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 30 23:43:50.344: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-8039" for this suite. 
• [SLOW TEST:16.560 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD preserving unknown fields at the schema root [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields at the schema root [Conformance]","total":294,"completed":28,"skipped":311,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 30 23:43:50.354: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir volume type on node default medium Jun 30 23:43:50.490: INFO: Waiting up to 5m0s for pod "pod-ef6ae499-cd6c-4e3b-b267-20abd2eccaaf" in namespace "emptydir-3456" to be "Succeeded or Failed" Jun 30 23:43:50.492: INFO: Pod "pod-ef6ae499-cd6c-4e3b-b267-20abd2eccaaf": Phase="Pending", Reason="", readiness=false. Elapsed: 2.607336ms Jun 30 23:43:52.538: INFO: Pod "pod-ef6ae499-cd6c-4e3b-b267-20abd2eccaaf": Phase="Pending", Reason="", readiness=false. Elapsed: 2.048634698s Jun 30 23:43:54.543: INFO: Pod "pod-ef6ae499-cd6c-4e3b-b267-20abd2eccaaf": Phase="Running", Reason="", readiness=true. Elapsed: 4.053088934s Jun 30 23:43:56.547: INFO: Pod "pod-ef6ae499-cd6c-4e3b-b267-20abd2eccaaf": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.057453695s STEP: Saw pod success Jun 30 23:43:56.547: INFO: Pod "pod-ef6ae499-cd6c-4e3b-b267-20abd2eccaaf" satisfied condition "Succeeded or Failed" Jun 30 23:43:56.551: INFO: Trying to get logs from node latest-worker2 pod pod-ef6ae499-cd6c-4e3b-b267-20abd2eccaaf container test-container: STEP: delete the pod Jun 30 23:43:56.590: INFO: Waiting for pod pod-ef6ae499-cd6c-4e3b-b267-20abd2eccaaf to disappear Jun 30 23:43:56.604: INFO: Pod pod-ef6ae499-cd6c-4e3b-b267-20abd2eccaaf no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 30 23:43:56.605: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-3456" for this suite. 
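An emptyDir with no medium set lands on the node's default storage, and the test above then asserts on the mount's mode bits. A minimal reproduction under stated assumptions: busybox stands in for the suite's test image, and the pod just reports the directory mode:

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "emptydir-mode-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "test-volume",
				// Leaving Medium unset selects the node's default medium.
				VolumeSource: corev1.VolumeSource{EmptyDir: &corev1.EmptyDirVolumeSource{}},
			}},
			Containers: []corev1.Container{{
				Name:    "check",
				Image:   "busybox",
				Command: []string{"sh", "-c", "stat -c %a /test-volume"},
				VolumeMounts: []corev1.VolumeMount{{
					Name: "test-volume", MountPath: "/test-volume",
				}},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}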
• [SLOW TEST:6.259 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42 volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":294,"completed":29,"skipped":334,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Namespaces [Serial] should patch a Namespace [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 30 23:43:56.613: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should patch a Namespace [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a Namespace STEP: patching the Namespace STEP: get the Namespace and ensuring it has the label [AfterEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 30 23:43:56.877: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "namespaces-4103" for this suite. STEP: Destroying namespace "nspatchtest-b1bd695d-6e44-4198-861d-2c0edd350f78-2338" for this suite. 
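The patch step above is a single merge-patch against the namespace object, followed by a get to confirm the label landed. A minimal client-go sketch; the namespace name, label key/value, and kubeconfig path are assumptions:

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config") // path is an assumption
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Merge-patch a label onto an existing namespace; the returned object
	// already reflects the change, so we read the label straight back.
	patch := []byte(`{"metadata":{"labels":{"testLabel":"testValue"}}}`)
	ns, err := cs.CoreV1().Namespaces().Patch(context.TODO(), "nspatch-demo", types.MergePatchType, patch, metav1.PatchOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println(ns.Labels["testLabel"])
}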
•{"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should patch a Namespace [Conformance]","total":294,"completed":30,"skipped":366,"failed":0} SSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 30 23:43:56.900: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Jun 30 23:43:57.448: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Jun 30 23:43:59.461: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729157437, loc:(*time.Location)(0x80643c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729157437, loc:(*time.Location)(0x80643c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729157437, loc:(*time.Location)(0x80643c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729157437, loc:(*time.Location)(0x80643c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)} Jun 30 23:44:01.467: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729157437, loc:(*time.Location)(0x80643c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729157437, loc:(*time.Location)(0x80643c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729157437, loc:(*time.Location)(0x80643c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729157437, loc:(*time.Location)(0x80643c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Jun 30 23:44:04.495: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] listing mutating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 
STEP: Listing all of the created mutating webhooks STEP: Creating a configMap that should be mutated STEP: Deleting the collection of mutating webhooks STEP: Creating a configMap that should not be mutated [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 30 23:44:05.121: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-8148" for this suite. STEP: Destroying namespace "webhook-8148-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:8.309 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 listing mutating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","total":294,"completed":31,"skipped":373,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPreemption [Serial] validates basic preemption works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 30 23:44:05.209: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-preemption STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:85 Jun 30 23:44:05.313: INFO: Waiting up to 1m0s for all nodes to be ready Jun 30 23:45:05.336: INFO: Waiting for terminating namespaces to be deleted... [It] validates basic preemption works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Create pods that use 2/3 of node resources. Jun 30 23:45:05.368: INFO: Created pod: pod0-sched-preemption-low-priority Jun 30 23:45:05.443: INFO: Created pod: pod1-sched-preemption-medium-priority STEP: Wait for pods to be scheduled. STEP: Run a high priority pod that has the same requirements as the lower priority pod [AfterEach] [sig-scheduling] SchedulerPreemption [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 30 23:45:29.500: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-preemption-401" for this suite.
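Basic preemption needs only a PriorityClass and pods that reference it: when the high-priority pod cannot fit, the scheduler evicts a lower-priority pod with comparable requests, as the test above demonstrates. A minimal sketch of the two objects involved; names, the priority value, and the memory request are placeholders:

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	schedulingv1 "k8s.io/api/scheduling/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// Cluster-scoped priority class; higher Value wins on preemption.
	pc := schedulingv1.PriorityClass{
		ObjectMeta: metav1.ObjectMeta{Name: "high-priority"},
		Value:      1000,
	}
	// A pod that claims that priority and requests enough memory to force
	// the scheduler to preempt a lower-priority pod on a full node.
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "preemptor"},
		Spec: corev1.PodSpec{
			PriorityClassName: "high-priority",
			Containers: []corev1.Container{{
				Name:    "app",
				Image:   "busybox",
				Command: []string{"sleep", "3600"},
				Resources: corev1.ResourceRequirements{
					Requests: corev1.ResourceList{
						corev1.ResourceMemory: resource.MustParse("2Gi"),
					},
				},
			}},
		},
	}
	for _, obj := range []interface{}{pc, pod} {
		out, _ := json.MarshalIndent(obj, "", "  ")
		fmt.Println(string(out))
	}
}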
[AfterEach] [sig-scheduling] SchedulerPreemption [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:75 • [SLOW TEST:84.334 seconds] [sig-scheduling] SchedulerPreemption [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates basic preemption works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPreemption [Serial] validates basic preemption works [Conformance]","total":294,"completed":32,"skipped":389,"failed":0} SSSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 30 23:45:29.543: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:162 [It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating the pod Jun 30 23:45:29.616: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 30 23:45:35.643: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-5143" for this suite. 
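With restartPolicy Never, one failing init container fails the whole pod and the app container is never started, which is the behaviour the test above asserts. A minimal reproduction; images and commands are placeholders:

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "init-fail-demo"},
		Spec: corev1.PodSpec{
			// Never means the kubelet will not retry the failed init
			// container; the pod goes straight to phase Failed.
			RestartPolicy: corev1.RestartPolicyNever,
			InitContainers: []corev1.Container{{
				Name:    "init-fail",
				Image:   "busybox",
				Command: []string{"sh", "-c", "exit 1"}, // always fails
			}},
			Containers: []corev1.Container{{
				Name:    "app",
				Image:   "busybox",
				Command: []string{"sh", "-c", "echo should never run"},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}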
• [SLOW TEST:6.169 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]","total":294,"completed":33,"skipped":398,"failed":0} SSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 30 23:45:35.713: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54 [It] with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 30 23:46:35.996: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-4434" for this suite. 
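A readiness probe that always fails leaves the container running and never restarted (unlike a liveness probe), but the pod never reports Ready; that is why the test above simply waits out a minute. A minimal sketch, noting that in the v1.18-era API this suite runs against, Probe embeds Handler (later releases renamed it ProbeHandler):

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "never-ready-demo"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:    "app",
				Image:   "busybox",
				Command: []string{"sleep", "3600"},
				ReadinessProbe: &corev1.Probe{
					// /bin/false exits non-zero, so every probe fails and
					// the Ready condition stays False forever.
					Handler: corev1.Handler{
						Exec: &corev1.ExecAction{Command: []string{"/bin/false"}},
					},
					InitialDelaySeconds: 5,
					PeriodSeconds:       5,
				},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}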
• [SLOW TEST:60.294 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]","total":294,"completed":34,"skipped":411,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 30 23:46:36.007: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating pod pod-subpath-test-configmap-z25z STEP: Creating a pod to test atomic-volume-subpath Jun 30 23:46:36.155: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-z25z" in namespace "subpath-7389" to be "Succeeded or Failed" Jun 30 23:46:36.178: INFO: Pod "pod-subpath-test-configmap-z25z": Phase="Pending", Reason="", readiness=false. Elapsed: 22.938976ms Jun 30 23:46:38.223: INFO: Pod "pod-subpath-test-configmap-z25z": Phase="Pending", Reason="", readiness=false. Elapsed: 2.067850113s Jun 30 23:46:40.227: INFO: Pod "pod-subpath-test-configmap-z25z": Phase="Running", Reason="", readiness=true. Elapsed: 4.072427031s Jun 30 23:46:42.232: INFO: Pod "pod-subpath-test-configmap-z25z": Phase="Running", Reason="", readiness=true. Elapsed: 6.077288365s Jun 30 23:46:44.237: INFO: Pod "pod-subpath-test-configmap-z25z": Phase="Running", Reason="", readiness=true. Elapsed: 8.081471042s Jun 30 23:46:46.240: INFO: Pod "pod-subpath-test-configmap-z25z": Phase="Running", Reason="", readiness=true. Elapsed: 10.085334876s Jun 30 23:46:48.244: INFO: Pod "pod-subpath-test-configmap-z25z": Phase="Running", Reason="", readiness=true. Elapsed: 12.089394644s Jun 30 23:46:50.251: INFO: Pod "pod-subpath-test-configmap-z25z": Phase="Running", Reason="", readiness=true. Elapsed: 14.09598866s Jun 30 23:46:52.256: INFO: Pod "pod-subpath-test-configmap-z25z": Phase="Running", Reason="", readiness=true. Elapsed: 16.101193403s Jun 30 23:46:54.261: INFO: Pod "pod-subpath-test-configmap-z25z": Phase="Running", Reason="", readiness=true. Elapsed: 18.10608617s Jun 30 23:46:56.266: INFO: Pod "pod-subpath-test-configmap-z25z": Phase="Running", Reason="", readiness=true. Elapsed: 20.111020318s Jun 30 23:46:58.271: INFO: Pod "pod-subpath-test-configmap-z25z": Phase="Running", Reason="", readiness=true. Elapsed: 22.115807918s Jun 30 23:47:00.275: INFO: Pod "pod-subpath-test-configmap-z25z": Phase="Running", Reason="", readiness=true. 
Elapsed: 24.120365393s Jun 30 23:47:02.280: INFO: Pod "pod-subpath-test-configmap-z25z": Phase="Succeeded", Reason="", readiness=false. Elapsed: 26.124535922s STEP: Saw pod success Jun 30 23:47:02.280: INFO: Pod "pod-subpath-test-configmap-z25z" satisfied condition "Succeeded or Failed" Jun 30 23:47:02.283: INFO: Trying to get logs from node latest-worker2 pod pod-subpath-test-configmap-z25z container test-container-subpath-configmap-z25z: STEP: delete the pod Jun 30 23:47:02.334: INFO: Waiting for pod pod-subpath-test-configmap-z25z to disappear Jun 30 23:47:02.349: INFO: Pod pod-subpath-test-configmap-z25z no longer exists STEP: Deleting pod pod-subpath-test-configmap-z25z Jun 30 23:47:02.349: INFO: Deleting pod "pod-subpath-test-configmap-z25z" in namespace "subpath-7389" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 30 23:47:02.352: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-7389" for this suite. • [SLOW TEST:26.354 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]","total":294,"completed":35,"skipped":427,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 30 23:47:02.362: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:162 [It] should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating the pod Jun 30 23:47:02.436: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 30 23:47:10.388: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-1339" for this suite. 
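The subpath test earlier in this block mounts a single configMap key over an existing file path rather than shadowing a whole directory: mountPath names the target file, subPath names the key. A minimal sketch of that mount shape; all object names and paths here are placeholders:

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "subpath-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "config",
				VolumeSource: corev1.VolumeSource{
					ConfigMap: &corev1.ConfigMapVolumeSource{
						LocalObjectReference: corev1.LocalObjectReference{Name: "demo-config"},
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:    "reader",
				Image:   "busybox",
				Command: []string{"cat", "/etc/demo/existing-file"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "config",
					MountPath: "/etc/demo/existing-file", // file-level mount point
					SubPath:   "existing-file",           // the configMap key to project
				}},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}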
• [SLOW TEST:8.050 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance]","total":294,"completed":36,"skipped":446,"failed":0} SSSSSSSS ------------------------------ [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 30 23:47:10.412: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a watch on configmaps with a certain label STEP: creating a new configmap STEP: modifying the configmap once STEP: changing the label value of the configmap STEP: Expecting to observe a delete notification for the watched object Jun 30 23:47:10.486: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-8733 /api/v1/namespaces/watch-8733/configmaps/e2e-watch-test-label-changed 45b30021-c63d-4f7c-9c12-77579608e3dd 17234346 0 2020-06-30 23:47:10 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2020-06-30 23:47:10 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} Jun 30 23:47:10.486: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-8733 /api/v1/namespaces/watch-8733/configmaps/e2e-watch-test-label-changed 45b30021-c63d-4f7c-9c12-77579608e3dd 17234347 0 2020-06-30 23:47:10 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2020-06-30 23:47:10 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} Jun 30 23:47:10.486: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-8733 /api/v1/namespaces/watch-8733/configmaps/e2e-watch-test-label-changed 45b30021-c63d-4f7c-9c12-77579608e3dd 17234348 0 2020-06-30 23:47:10 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2020-06-30 23:47:10 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} STEP: modifying the configmap a second time STEP: Expecting not to observe a notification because the object no longer meets the selector's requirements STEP: changing the label value of the configmap back STEP: modifying the configmap a third time STEP: deleting the configmap STEP: 
Expecting to observe an add notification for the watched object when the label value was restored Jun 30 23:47:20.588: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-8733 /api/v1/namespaces/watch-8733/configmaps/e2e-watch-test-label-changed 45b30021-c63d-4f7c-9c12-77579608e3dd 17234394 0 2020-06-30 23:47:10 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2020-06-30 23:47:20 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} Jun 30 23:47:20.588: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-8733 /api/v1/namespaces/watch-8733/configmaps/e2e-watch-test-label-changed 45b30021-c63d-4f7c-9c12-77579608e3dd 17234395 0 2020-06-30 23:47:10 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2020-06-30 23:47:20 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},Immutable:nil,} Jun 30 23:47:20.588: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-8733 /api/v1/namespaces/watch-8733/configmaps/e2e-watch-test-label-changed 45b30021-c63d-4f7c-9c12-77579608e3dd 17234396 0 2020-06-30 23:47:10 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2020-06-30 23:47:20 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},Immutable:nil,} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 30 23:47:20.588: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-8733" for this suite. 
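The event sequence above (relabelling an object out of the selector surfaces as DELETED; restoring the label surfaces as ADDED) can be observed with a plain client-go watch. A minimal sketch; the namespace, label selector value, and kubeconfig path are assumptions taken from this run:

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config") // path is an assumption
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Watch only configmaps carrying the label the test toggles; objects
	// leaving the selector appear as DELETED, re-entering as ADDED.
	sel := "watch-this-configmap=label-changed-and-restored"
	w, err := cs.CoreV1().ConfigMaps("default").Watch(context.TODO(), metav1.ListOptions{LabelSelector: sel})
	if err != nil {
		panic(err)
	}
	defer w.Stop()
	for ev := range w.ResultChan() {
		cm, ok := ev.Object.(*corev1.ConfigMap)
		if !ok {
			continue // skip non-ConfigMap events such as watch errors
		}
		fmt.Println(ev.Type, cm.Name, cm.Data)
	}
}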
• [SLOW TEST:10.199 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance]","total":294,"completed":37,"skipped":454,"failed":0} [sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 30 23:47:20.612: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the rc1 STEP: create the rc2 STEP: set half of pods created by rc simpletest-rc-to-be-deleted to have rc simpletest-rc-to-stay as owner as well STEP: delete the rc simpletest-rc-to-be-deleted STEP: wait for the rc to be deleted STEP: Gathering metrics W0630 23:47:32.522073 8 metrics_grabber.go:94] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. 
Jun 30 23:47:32.522: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: Jun 30 23:47:32.522: INFO: Deleting pod "simpletest-rc-to-be-deleted-7vlgr" in namespace "gc-9066" Jun 30 23:47:32.743: INFO: Deleting pod "simpletest-rc-to-be-deleted-8pdp6" in namespace "gc-9066" Jun 30 23:47:32.855: INFO: Deleting pod "simpletest-rc-to-be-deleted-bpq9n" in namespace "gc-9066" Jun 30 23:47:32.905: INFO: Deleting pod "simpletest-rc-to-be-deleted-d9dd6" in namespace "gc-9066" Jun 30 23:47:33.106: INFO: Deleting pod "simpletest-rc-to-be-deleted-gmwtp" in namespace "gc-9066" [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 30 23:47:33.293: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-9066" for this suite. • [SLOW TEST:12.790 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]","total":294,"completed":38,"skipped":454,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 30 23:47:33.403: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:77 [It] RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Jun 30 23:47:33.779: INFO: Creating deployment "test-recreate-deployment" Jun 30 23:47:33.783: INFO: Waiting deployment "test-recreate-deployment" to be updated to revision 1 Jun 30 23:47:33.937: INFO: deployment "test-recreate-deployment" doesn't have the required revision 
set Jun 30 23:47:35.945: INFO: Waiting deployment "test-recreate-deployment" to complete Jun 30 23:47:35.950: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729157654, loc:(*time.Location)(0x80643c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729157654, loc:(*time.Location)(0x80643c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729157654, loc:(*time.Location)(0x80643c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729157653, loc:(*time.Location)(0x80643c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-6d65b9f6d8\" is progressing."}}, CollisionCount:(*int32)(nil)} Jun 30 23:47:37.954: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729157654, loc:(*time.Location)(0x80643c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729157654, loc:(*time.Location)(0x80643c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729157654, loc:(*time.Location)(0x80643c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729157653, loc:(*time.Location)(0x80643c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-6d65b9f6d8\" is progressing."}}, CollisionCount:(*int32)(nil)} Jun 30 23:47:39.972: INFO: Triggering a new rollout for deployment "test-recreate-deployment" Jun 30 23:47:39.978: INFO: Updating deployment test-recreate-deployment Jun 30 23:47:39.979: INFO: Watching deployment "test-recreate-deployment" to verify that new pods will not run with old pods [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:71 Jun 30 23:47:41.967: INFO: Deployment "test-recreate-deployment": &Deployment{ObjectMeta:{test-recreate-deployment deployment-9734 /apis/apps/v1/namespaces/deployment-9734/deployments/test-recreate-deployment 7a9f75d0-e78c-4cfc-975b-4f4ee3979473 17234699 2 2020-06-30 23:47:33 +0000 UTC map[name:sample-pod-3] map[deployment.kubernetes.io/revision:2] [] [] [{e2e.test Update apps/v1 2020-06-30 23:47:39 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{}}},"f:strategy":{"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update apps/v1 2020-06-30 23:47:41 +0000 UTC
FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}}},"f:status":{"f:conditions":{".":{},"k:{\"type\":\"Available\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Progressing\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:observedGeneration":{},"f:replicas":{},"f:unavailableReplicas":{},"f:updatedReplicas":{}}}}]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0041bb248 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},Strategy:DeploymentStrategy{Type:Recreate,RollingUpdate:nil,},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:0,UnavailableReplicas:1,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:False,Reason:MinimumReplicasUnavailable,Message:Deployment does not have minimum availability.,LastUpdateTime:2020-06-30 23:47:41 +0000 UTC,LastTransitionTime:2020-06-30 23:47:41 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:ReplicaSetUpdated,Message:ReplicaSet "test-recreate-deployment-d5667d9c7" is progressing.,LastUpdateTime:2020-06-30 23:47:41 +0000 UTC,LastTransitionTime:2020-06-30 23:47:33 +0000 UTC,},},ReadyReplicas:0,CollisionCount:nil,},} Jun 30 23:47:41.970: INFO: New ReplicaSet "test-recreate-deployment-d5667d9c7" of Deployment "test-recreate-deployment": &ReplicaSet{ObjectMeta:{test-recreate-deployment-d5667d9c7 deployment-9734 /apis/apps/v1/namespaces/deployment-9734/replicasets/test-recreate-deployment-d5667d9c7 3ed5e588-37f9-4b50-a02a-69a86313a7d2 17234696 1 2020-06-30 23:47:40 +0000 UTC map[name:sample-pod-3 pod-template-hash:d5667d9c7] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:1 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment test-recreate-deployment 7a9f75d0-e78c-4cfc-975b-4f4ee3979473 0xc0041bb750 0xc0041bb751}] [] [{kube-controller-manager Update apps/v1 2020-06-30 23:47:41 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7a9f75d0-e78c-4cfc-975b-4f4ee3979473\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: d5667d9c7,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3 pod-template-hash:d5667d9c7] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0041bb7c8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Jun 30 23:47:41.971: INFO: All old ReplicaSets of Deployment "test-recreate-deployment": Jun 30 23:47:41.971: INFO: &ReplicaSet{ObjectMeta:{test-recreate-deployment-6d65b9f6d8 deployment-9734 /apis/apps/v1/namespaces/deployment-9734/replicasets/test-recreate-deployment-6d65b9f6d8 ba0b44ae-7ca7-4689-8a31-e95326ef546c 17234685 2 2020-06-30 23:47:33 +0000 UTC map[name:sample-pod-3 pod-template-hash:6d65b9f6d8] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:1 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-recreate-deployment 7a9f75d0-e78c-4cfc-975b-4f4ee3979473 0xc0041bb657 0xc0041bb658}] [] [{kube-controller-manager Update apps/v1 2020-06-30 23:47:40 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7a9f75d0-e78c-4cfc-975b-4f4ee3979473\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:observedGeneration":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 6d65b9f6d8,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3 pod-template-hash:6d65b9f6d8] map[] [] [] []} {[] [] [{agnhost us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0041bb6e8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Jun 30 23:47:42.111: INFO: Pod "test-recreate-deployment-d5667d9c7-lbw5t" is not available: &Pod{ObjectMeta:{test-recreate-deployment-d5667d9c7-lbw5t test-recreate-deployment-d5667d9c7- deployment-9734 /api/v1/namespaces/deployment-9734/pods/test-recreate-deployment-d5667d9c7-lbw5t 141fd04e-006a-4a8b-8d24-ada76ff7954b 17234700 0 2020-06-30 23:47:40 +0000 UTC map[name:sample-pod-3 pod-template-hash:d5667d9c7] map[] [{apps/v1 ReplicaSet test-recreate-deployment-d5667d9c7 3ed5e588-37f9-4b50-a02a-69a86313a7d2 0xc0041bbca0 0xc0041bbca1}] [] [{kube-controller-manager Update v1 2020-06-30 23:47:40 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"3ed5e588-37f9-4b50-a02a-69a86313a7d2\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-06-30 23:47:41 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-qwtlq,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-qwtlq,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-qwtlq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-30 23:47:41 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-30 23:47:41 +0000 
UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-30 23:47:41 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-30 23:47:40 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:,StartTime:2020-06-30 23:47:41 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 30 23:47:42.111: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-9734" for this suite. • [SLOW TEST:8.726 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance]","total":294,"completed":39,"skipped":481,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 30 23:47:42.130: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for intra-pod communication: http [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Performing setup for networking test in namespace pod-network-test-4794 STEP: creating a selector STEP: Creating the service pods in kubernetes Jun 30 23:47:42.352: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable Jun 30 23:47:42.480: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Jun 30 23:47:44.484: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Jun 30 23:47:46.715: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Jun 30 23:47:48.527: INFO: The status of Pod netserver-0 is Running (Ready = false) Jun 30 23:47:50.491: INFO: The status of Pod netserver-0 is Running (Ready = false) Jun 30 23:47:52.484: INFO: The status of Pod netserver-0 is Running (Ready = false) Jun 30 23:47:54.484: 
INFO: The status of Pod netserver-0 is Running (Ready = false) Jun 30 23:47:56.484: INFO: The status of Pod netserver-0 is Running (Ready = false) Jun 30 23:47:58.485: INFO: The status of Pod netserver-0 is Running (Ready = false) Jun 30 23:48:00.484: INFO: The status of Pod netserver-0 is Running (Ready = false) Jun 30 23:48:02.485: INFO: The status of Pod netserver-0 is Running (Ready = true) Jun 30 23:48:02.491: INFO: The status of Pod netserver-1 is Running (Ready = true) STEP: Creating test pods Jun 30 23:48:06.545: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.110:8080/dial?request=hostname&protocol=http&host=10.244.1.59&port=8080&tries=1'] Namespace:pod-network-test-4794 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jun 30 23:48:06.545: INFO: >>> kubeConfig: /root/.kube/config I0630 23:48:06.577782 8 log.go:172] (0xc002e3ef20) (0xc0020f4c80) Create stream I0630 23:48:06.577828 8 log.go:172] (0xc002e3ef20) (0xc0020f4c80) Stream added, broadcasting: 1 I0630 23:48:06.580050 8 log.go:172] (0xc002e3ef20) Reply frame received for 1 I0630 23:48:06.580086 8 log.go:172] (0xc002e3ef20) (0xc0025fe000) Create stream I0630 23:48:06.580097 8 log.go:172] (0xc002e3ef20) (0xc0025fe000) Stream added, broadcasting: 3 I0630 23:48:06.580921 8 log.go:172] (0xc002e3ef20) Reply frame received for 3 I0630 23:48:06.580967 8 log.go:172] (0xc002e3ef20) (0xc0020f4e60) Create stream I0630 23:48:06.580979 8 log.go:172] (0xc002e3ef20) (0xc0020f4e60) Stream added, broadcasting: 5 I0630 23:48:06.581908 8 log.go:172] (0xc002e3ef20) Reply frame received for 5 I0630 23:48:06.768681 8 log.go:172] (0xc002e3ef20) Data frame received for 3 I0630 23:48:06.768707 8 log.go:172] (0xc0025fe000) (3) Data frame handling I0630 23:48:06.768722 8 log.go:172] (0xc0025fe000) (3) Data frame sent I0630 23:48:06.769427 8 log.go:172] (0xc002e3ef20) Data frame received for 3 I0630 23:48:06.769461 8 log.go:172] (0xc0025fe000) (3) Data frame handling I0630 23:48:06.769650 8 log.go:172] (0xc002e3ef20) Data frame received for 5 I0630 23:48:06.769674 8 log.go:172] (0xc0020f4e60) (5) Data frame handling I0630 23:48:06.771037 8 log.go:172] (0xc002e3ef20) Data frame received for 1 I0630 23:48:06.771057 8 log.go:172] (0xc0020f4c80) (1) Data frame handling I0630 23:48:06.771075 8 log.go:172] (0xc0020f4c80) (1) Data frame sent I0630 23:48:06.771090 8 log.go:172] (0xc002e3ef20) (0xc0020f4c80) Stream removed, broadcasting: 1 I0630 23:48:06.771103 8 log.go:172] (0xc002e3ef20) Go away received I0630 23:48:06.771219 8 log.go:172] (0xc002e3ef20) (0xc0020f4c80) Stream removed, broadcasting: 1 I0630 23:48:06.771240 8 log.go:172] (0xc002e3ef20) (0xc0025fe000) Stream removed, broadcasting: 3 I0630 23:48:06.771250 8 log.go:172] (0xc002e3ef20) (0xc0020f4e60) Stream removed, broadcasting: 5 Jun 30 23:48:06.771: INFO: Waiting for responses: map[] Jun 30 23:48:06.774: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.110:8080/dial?request=hostname&protocol=http&host=10.244.2.109&port=8080&tries=1'] Namespace:pod-network-test-4794 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jun 30 23:48:06.774: INFO: >>> kubeConfig: /root/.kube/config I0630 23:48:06.805830 8 log.go:172] (0xc001eb5970) (0xc0020f54a0) Create stream I0630 23:48:06.805870 8 log.go:172] (0xc001eb5970) (0xc0020f54a0) Stream added, broadcasting: 1 I0630 23:48:06.809330 8 log.go:172] (0xc001eb5970) Reply 
frame received for 1 I0630 23:48:06.809373 8 log.go:172] (0xc001eb5970) (0xc00208adc0) Create stream I0630 23:48:06.809385 8 log.go:172] (0xc001eb5970) (0xc00208adc0) Stream added, broadcasting: 3 I0630 23:48:06.810194 8 log.go:172] (0xc001eb5970) Reply frame received for 3 I0630 23:48:06.810225 8 log.go:172] (0xc001eb5970) (0xc0020f5540) Create stream I0630 23:48:06.810237 8 log.go:172] (0xc001eb5970) (0xc0020f5540) Stream added, broadcasting: 5 I0630 23:48:06.810945 8 log.go:172] (0xc001eb5970) Reply frame received for 5 I0630 23:48:06.868797 8 log.go:172] (0xc001eb5970) Data frame received for 3 I0630 23:48:06.868838 8 log.go:172] (0xc00208adc0) (3) Data frame handling I0630 23:48:06.868871 8 log.go:172] (0xc00208adc0) (3) Data frame sent I0630 23:48:06.869700 8 log.go:172] (0xc001eb5970) Data frame received for 3 I0630 23:48:06.869855 8 log.go:172] (0xc00208adc0) (3) Data frame handling I0630 23:48:06.869893 8 log.go:172] (0xc001eb5970) Data frame received for 5 I0630 23:48:06.869924 8 log.go:172] (0xc0020f5540) (5) Data frame handling I0630 23:48:06.871380 8 log.go:172] (0xc001eb5970) Data frame received for 1 I0630 23:48:06.871418 8 log.go:172] (0xc0020f54a0) (1) Data frame handling I0630 23:48:06.871450 8 log.go:172] (0xc0020f54a0) (1) Data frame sent I0630 23:48:06.871485 8 log.go:172] (0xc001eb5970) (0xc0020f54a0) Stream removed, broadcasting: 1 I0630 23:48:06.871607 8 log.go:172] (0xc001eb5970) (0xc0020f54a0) Stream removed, broadcasting: 1 I0630 23:48:06.871633 8 log.go:172] (0xc001eb5970) (0xc00208adc0) Stream removed, broadcasting: 3 I0630 23:48:06.871785 8 log.go:172] (0xc001eb5970) Go away received I0630 23:48:06.872077 8 log.go:172] (0xc001eb5970) (0xc0020f5540) Stream removed, broadcasting: 5 Jun 30 23:48:06.872: INFO: Waiting for responses: map[] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 30 23:48:06.872: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-4794" for this suite. 
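[Editor's note] The intra-pod check above works by asking one pod (test-container-pod, 10.244.2.110 in this run) to dial each netserver pod's hostname endpoint over HTTP and report back which hostnames answered. Below is a minimal Go sketch of that probe issued directly, not the framework's actual implementation: the IPs and port are taken from the log above, and the JSON response shape ({"responses": [...]}) is an assumption based on how the framework's "Waiting for responses" step parses the reply.

```go
// Minimal sketch of the /dial probe seen in the log above.
// Assumptions: agnhost netserver listens on :8080 and its /dial
// endpoint returns JSON of the form {"responses":["<hostname>",...]}.
// The IPs below are the pod IPs from this particular run.
package main

import (
	"encoding/json"
	"fmt"
	"net/http"
	"net/url"
)

type dialResponse struct {
	Responses []string `json:"responses"`
}

func main() {
	// Ask test-container-pod (10.244.2.110) to dial the target
	// netserver pod (10.244.1.59) once over HTTP and return the
	// hostname that answered.
	q := url.Values{}
	q.Set("request", "hostname")
	q.Set("protocol", "http")
	q.Set("host", "10.244.1.59")
	q.Set("port", "8080")
	q.Set("tries", "1")

	resp, err := http.Get("http://10.244.2.110:8080/dial?" + q.Encode())
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	var dr dialResponse
	if err := json.NewDecoder(resp.Body).Decode(&dr); err != nil {
		panic(err)
	}
	fmt.Printf("hostnames that answered: %v\n", dr.Responses)
}
```

In the log this corresponds to the two curl invocations (one per netserver pod) followed by "Waiting for responses: map[]", i.e. every expected hostname was collected and nothing remained outstanding.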
• [SLOW TEST:24.750 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29 should function for intra-pod communication: http [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","total":294,"completed":40,"skipped":511,"failed":0} SSS ------------------------------ [sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 30 23:48:06.881: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:809 [It] should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating service in namespace services-8207 STEP: creating service affinity-clusterip-transition in namespace services-8207 STEP: creating replication controller affinity-clusterip-transition in namespace services-8207 I0630 23:48:07.117085 8 runners.go:190] Created replication controller with name: affinity-clusterip-transition, namespace: services-8207, replica count: 3 I0630 23:48:10.167724 8 runners.go:190] affinity-clusterip-transition Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0630 23:48:13.168072 8 runners.go:190] affinity-clusterip-transition Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Jun 30 23:48:13.179: INFO: Creating new exec pod Jun 30 23:48:18.253: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-8207 execpod-affinityd5b55 -- /bin/sh -x -c nc -zv -t -w 2 affinity-clusterip-transition 80' Jun 30 23:48:18.526: INFO: stderr: "I0630 23:48:18.401951 440 log.go:172] (0xc000b92d10) (0xc000709360) Create stream\nI0630 23:48:18.401998 440 log.go:172] (0xc000b92d10) (0xc000709360) Stream added, broadcasting: 1\nI0630 23:48:18.404093 440 log.go:172] (0xc000b92d10) Reply frame received for 1\nI0630 23:48:18.404123 440 log.go:172] (0xc000b92d10) (0xc00063c0a0) Create stream\nI0630 23:48:18.404131 440 log.go:172] (0xc000b92d10) (0xc00063c0a0) Stream added, broadcasting: 3\nI0630 23:48:18.405323 440 log.go:172] (0xc000b92d10) Reply frame received for 3\nI0630 23:48:18.405377 440 log.go:172] (0xc000b92d10) (0xc00072a140) Create stream\nI0630 23:48:18.405409 440 log.go:172] (0xc000b92d10) (0xc00072a140) Stream added, broadcasting: 5\nI0630 23:48:18.406402 440 log.go:172] (0xc000b92d10) Reply frame received for 5\nI0630 23:48:18.507719 440 log.go:172] 
(0xc000b92d10) Data frame received for 5\nI0630 23:48:18.507768 440 log.go:172] (0xc00072a140) (5) Data frame handling\nI0630 23:48:18.507874 440 log.go:172] (0xc00072a140) (5) Data frame sent\n+ nc -zv -t -w 2 affinity-clusterip-transition 80\nI0630 23:48:18.517704 440 log.go:172] (0xc000b92d10) Data frame received for 5\nI0630 23:48:18.517741 440 log.go:172] (0xc00072a140) (5) Data frame handling\nI0630 23:48:18.517785 440 log.go:172] (0xc00072a140) (5) Data frame sent\nConnection to affinity-clusterip-transition 80 port [tcp/http] succeeded!\nI0630 23:48:18.517965 440 log.go:172] (0xc000b92d10) Data frame received for 3\nI0630 23:48:18.517985 440 log.go:172] (0xc00063c0a0) (3) Data frame handling\nI0630 23:48:18.518193 440 log.go:172] (0xc000b92d10) Data frame received for 5\nI0630 23:48:18.518210 440 log.go:172] (0xc00072a140) (5) Data frame handling\nI0630 23:48:18.520073 440 log.go:172] (0xc000b92d10) Data frame received for 1\nI0630 23:48:18.520099 440 log.go:172] (0xc000709360) (1) Data frame handling\nI0630 23:48:18.520116 440 log.go:172] (0xc000709360) (1) Data frame sent\nI0630 23:48:18.520133 440 log.go:172] (0xc000b92d10) (0xc000709360) Stream removed, broadcasting: 1\nI0630 23:48:18.520160 440 log.go:172] (0xc000b92d10) Go away received\nI0630 23:48:18.520455 440 log.go:172] (0xc000b92d10) (0xc000709360) Stream removed, broadcasting: 1\nI0630 23:48:18.520472 440 log.go:172] (0xc000b92d10) (0xc00063c0a0) Stream removed, broadcasting: 3\nI0630 23:48:18.520479 440 log.go:172] (0xc000b92d10) (0xc00072a140) Stream removed, broadcasting: 5\n" Jun 30 23:48:18.526: INFO: stdout: "" Jun 30 23:48:18.527: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-8207 execpod-affinityd5b55 -- /bin/sh -x -c nc -zv -t -w 2 10.107.165.197 80' Jun 30 23:48:18.762: INFO: stderr: "I0630 23:48:18.681400 460 log.go:172] (0xc000af5080) (0xc000bfe280) Create stream\nI0630 23:48:18.681455 460 log.go:172] (0xc000af5080) (0xc000bfe280) Stream added, broadcasting: 1\nI0630 23:48:18.686432 460 log.go:172] (0xc000af5080) Reply frame received for 1\nI0630 23:48:18.686481 460 log.go:172] (0xc000af5080) (0xc0006fa960) Create stream\nI0630 23:48:18.686497 460 log.go:172] (0xc000af5080) (0xc0006fa960) Stream added, broadcasting: 3\nI0630 23:48:18.687511 460 log.go:172] (0xc000af5080) Reply frame received for 3\nI0630 23:48:18.687548 460 log.go:172] (0xc000af5080) (0xc0000f3900) Create stream\nI0630 23:48:18.687560 460 log.go:172] (0xc000af5080) (0xc0000f3900) Stream added, broadcasting: 5\nI0630 23:48:18.688376 460 log.go:172] (0xc000af5080) Reply frame received for 5\nI0630 23:48:18.752637 460 log.go:172] (0xc000af5080) Data frame received for 3\nI0630 23:48:18.752679 460 log.go:172] (0xc0006fa960) (3) Data frame handling\nI0630 23:48:18.752706 460 log.go:172] (0xc000af5080) Data frame received for 5\nI0630 23:48:18.752718 460 log.go:172] (0xc0000f3900) (5) Data frame handling\nI0630 23:48:18.752736 460 log.go:172] (0xc0000f3900) (5) Data frame sent\nI0630 23:48:18.752748 460 log.go:172] (0xc000af5080) Data frame received for 5\nI0630 23:48:18.752758 460 log.go:172] (0xc0000f3900) (5) Data frame handling\n+ nc -zv -t -w 2 10.107.165.197 80\nConnection to 10.107.165.197 80 port [tcp/http] succeeded!\nI0630 23:48:18.754351 460 log.go:172] (0xc000af5080) Data frame received for 1\nI0630 23:48:18.754394 460 log.go:172] (0xc000bfe280) (1) Data frame handling\nI0630 23:48:18.754435 460 log.go:172] (0xc000bfe280) (1) Data frame sent\nI0630 
23:48:18.754469 460 log.go:172] (0xc000af5080) (0xc000bfe280) Stream removed, broadcasting: 1\nI0630 23:48:18.754514 460 log.go:172] (0xc000af5080) Go away received\nI0630 23:48:18.755000 460 log.go:172] (0xc000af5080) (0xc000bfe280) Stream removed, broadcasting: 1\nI0630 23:48:18.755025 460 log.go:172] (0xc000af5080) (0xc0006fa960) Stream removed, broadcasting: 3\nI0630 23:48:18.755038 460 log.go:172] (0xc000af5080) (0xc0000f3900) Stream removed, broadcasting: 5\n" Jun 30 23:48:18.762: INFO: stdout: "" Jun 30 23:48:18.769: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-8207 execpod-affinityd5b55 -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://10.107.165.197:80/ ; done' Jun 30 23:48:19.153: INFO: stderr: "I0630 23:48:18.919244 480 log.go:172] (0xc00003a420) (0xc00072a820) Create stream\nI0630 23:48:18.919304 480 log.go:172] (0xc00003a420) (0xc00072a820) Stream added, broadcasting: 1\nI0630 23:48:18.921675 480 log.go:172] (0xc00003a420) Reply frame received for 1\nI0630 23:48:18.921719 480 log.go:172] (0xc00003a420) (0xc0006ccaa0) Create stream\nI0630 23:48:18.921731 480 log.go:172] (0xc00003a420) (0xc0006ccaa0) Stream added, broadcasting: 3\nI0630 23:48:18.922925 480 log.go:172] (0xc00003a420) Reply frame received for 3\nI0630 23:48:18.922960 480 log.go:172] (0xc00003a420) (0xc0006c2dc0) Create stream\nI0630 23:48:18.922972 480 log.go:172] (0xc00003a420) (0xc0006c2dc0) Stream added, broadcasting: 5\nI0630 23:48:18.923897 480 log.go:172] (0xc00003a420) Reply frame received for 5\nI0630 23:48:18.983186 480 log.go:172] (0xc00003a420) Data frame received for 3\nI0630 23:48:18.983244 480 log.go:172] (0xc0006ccaa0) (3) Data frame handling\nI0630 23:48:18.983282 480 log.go:172] (0xc0006ccaa0) (3) Data frame sent\nI0630 23:48:18.983338 480 log.go:172] (0xc00003a420) Data frame received for 5\nI0630 23:48:18.983376 480 log.go:172] (0xc0006c2dc0) (5) Data frame handling\nI0630 23:48:18.983405 480 log.go:172] (0xc0006c2dc0) (5) Data frame sent\n+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.107.165.197:80/\nI0630 23:48:19.064192 480 log.go:172] (0xc00003a420) Data frame received for 3\nI0630 23:48:19.064228 480 log.go:172] (0xc0006ccaa0) (3) Data frame handling\nI0630 23:48:19.064250 480 log.go:172] (0xc0006ccaa0) (3) Data frame sent\nI0630 23:48:19.064423 480 log.go:172] (0xc00003a420) Data frame received for 5\nI0630 23:48:19.064449 480 log.go:172] (0xc0006c2dc0) (5) Data frame handling\nI0630 23:48:19.064474 480 log.go:172] (0xc0006c2dc0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.107.165.197:80/\nI0630 23:48:19.064695 480 log.go:172] (0xc00003a420) Data frame received for 3\nI0630 23:48:19.064713 480 log.go:172] (0xc0006ccaa0) (3) Data frame handling\nI0630 23:48:19.064729 480 log.go:172] (0xc0006ccaa0) (3) Data frame sent\nI0630 23:48:19.071519 480 log.go:172] (0xc00003a420) Data frame received for 3\nI0630 23:48:19.071536 480 log.go:172] (0xc0006ccaa0) (3) Data frame handling\nI0630 23:48:19.071556 480 log.go:172] (0xc0006ccaa0) (3) Data frame sent\nI0630 23:48:19.072472 480 log.go:172] (0xc00003a420) Data frame received for 5\nI0630 23:48:19.072488 480 log.go:172] (0xc0006c2dc0) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.107.165.197:80/\nI0630 23:48:19.072508 480 log.go:172] (0xc00003a420) Data frame received for 3\nI0630 23:48:19.072529 480 log.go:172] (0xc0006ccaa0) (3) Data frame 
handling\nI0630 23:48:19.072546 480 log.go:172] (0xc0006ccaa0) (3) Data frame sent\nI0630 23:48:19.072561 480 log.go:172] (0xc0006c2dc0) (5) Data frame sent\nI0630 23:48:19.077617 480 log.go:172] (0xc00003a420) Data frame received for 3\nI0630 23:48:19.077631 480 log.go:172] (0xc0006ccaa0) (3) Data frame handling\nI0630 23:48:19.077643 480 log.go:172] (0xc0006ccaa0) (3) Data frame sent\nI0630 23:48:19.077991 480 log.go:172] (0xc00003a420) Data frame received for 5\nI0630 23:48:19.078007 480 log.go:172] (0xc0006c2dc0) (5) Data frame handling\nI0630 23:48:19.078014 480 log.go:172] (0xc0006c2dc0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.107.165.197:80/\nI0630 23:48:19.078051 480 log.go:172] (0xc00003a420) Data frame received for 3\nI0630 23:48:19.078061 480 log.go:172] (0xc0006ccaa0) (3) Data frame handling\nI0630 23:48:19.078067 480 log.go:172] (0xc0006ccaa0) (3) Data frame sent\nI0630 23:48:19.083220 480 log.go:172] (0xc00003a420) Data frame received for 3\nI0630 23:48:19.083236 480 log.go:172] (0xc0006ccaa0) (3) Data frame handling\nI0630 23:48:19.083248 480 log.go:172] (0xc0006ccaa0) (3) Data frame sent\nI0630 23:48:19.083736 480 log.go:172] (0xc00003a420) Data frame received for 3\nI0630 23:48:19.083766 480 log.go:172] (0xc0006ccaa0) (3) Data frame handling\nI0630 23:48:19.083776 480 log.go:172] (0xc0006ccaa0) (3) Data frame sent\nI0630 23:48:19.083805 480 log.go:172] (0xc00003a420) Data frame received for 5\nI0630 23:48:19.083847 480 log.go:172] (0xc0006c2dc0) (5) Data frame handling\nI0630 23:48:19.083869 480 log.go:172] (0xc0006c2dc0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.107.165.197:80/\nI0630 23:48:19.088006 480 log.go:172] (0xc00003a420) Data frame received for 3\nI0630 23:48:19.088032 480 log.go:172] (0xc0006ccaa0) (3) Data frame handling\nI0630 23:48:19.088051 480 log.go:172] (0xc0006ccaa0) (3) Data frame sent\nI0630 23:48:19.088405 480 log.go:172] (0xc00003a420) Data frame received for 3\nI0630 23:48:19.088431 480 log.go:172] (0xc0006ccaa0) (3) Data frame handling\nI0630 23:48:19.088453 480 log.go:172] (0xc0006ccaa0) (3) Data frame sent\nI0630 23:48:19.088473 480 log.go:172] (0xc00003a420) Data frame received for 5\nI0630 23:48:19.088486 480 log.go:172] (0xc0006c2dc0) (5) Data frame handling\nI0630 23:48:19.088494 480 log.go:172] (0xc0006c2dc0) (5) Data frame sent\nI0630 23:48:19.088503 480 log.go:172] (0xc00003a420) Data frame received for 5\nI0630 23:48:19.088528 480 log.go:172] (0xc0006c2dc0) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.107.165.197:80/\nI0630 23:48:19.088551 480 log.go:172] (0xc0006c2dc0) (5) Data frame sent\nI0630 23:48:19.092663 480 log.go:172] (0xc00003a420) Data frame received for 3\nI0630 23:48:19.092690 480 log.go:172] (0xc0006ccaa0) (3) Data frame handling\nI0630 23:48:19.092727 480 log.go:172] (0xc0006ccaa0) (3) Data frame sent\nI0630 23:48:19.093055 480 log.go:172] (0xc00003a420) Data frame received for 3\nI0630 23:48:19.093088 480 log.go:172] (0xc0006ccaa0) (3) Data frame handling\nI0630 23:48:19.093106 480 log.go:172] (0xc0006ccaa0) (3) Data frame sent\nI0630 23:48:19.093312 480 log.go:172] (0xc00003a420) Data frame received for 5\nI0630 23:48:19.093340 480 log.go:172] (0xc0006c2dc0) (5) Data frame handling\nI0630 23:48:19.093367 480 log.go:172] (0xc0006c2dc0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.107.165.197:80/\nI0630 23:48:19.097054 480 log.go:172] (0xc00003a420) Data frame received for 3\nI0630 23:48:19.097077 480 
log.go:172] (0xc0006ccaa0) (3) Data frame handling\nI0630 23:48:19.097314 480 log.go:172] (0xc0006ccaa0) (3) Data frame sent\nI0630 23:48:19.097758 480 log.go:172] (0xc00003a420) Data frame received for 5\nI0630 23:48:19.097781 480 log.go:172] (0xc0006c2dc0) (5) Data frame handling\nI0630 23:48:19.097810 480 log.go:172] (0xc0006c2dc0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.107.165.197:80/\nI0630 23:48:19.097830 480 log.go:172] (0xc00003a420) Data frame received for 3\nI0630 23:48:19.097843 480 log.go:172] (0xc0006ccaa0) (3) Data frame handling\nI0630 23:48:19.097855 480 log.go:172] (0xc0006ccaa0) (3) Data frame sent\nI0630 23:48:19.102245 480 log.go:172] (0xc00003a420) Data frame received for 3\nI0630 23:48:19.102268 480 log.go:172] (0xc0006ccaa0) (3) Data frame handling\nI0630 23:48:19.102292 480 log.go:172] (0xc0006ccaa0) (3) Data frame sent\nI0630 23:48:19.103062 480 log.go:172] (0xc00003a420) Data frame received for 5\nI0630 23:48:19.103099 480 log.go:172] (0xc0006c2dc0) (5) Data frame handling\nI0630 23:48:19.103143 480 log.go:172] (0xc0006c2dc0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.107.165.197:80/\nI0630 23:48:19.103445 480 log.go:172] (0xc00003a420) Data frame received for 3\nI0630 23:48:19.103464 480 log.go:172] (0xc0006ccaa0) (3) Data frame handling\nI0630 23:48:19.103484 480 log.go:172] (0xc0006ccaa0) (3) Data frame sent\nI0630 23:48:19.106867 480 log.go:172] (0xc00003a420) Data frame received for 3\nI0630 23:48:19.106893 480 log.go:172] (0xc0006ccaa0) (3) Data frame handling\nI0630 23:48:19.106914 480 log.go:172] (0xc0006ccaa0) (3) Data frame sent\nI0630 23:48:19.107181 480 log.go:172] (0xc00003a420) Data frame received for 5\nI0630 23:48:19.107193 480 log.go:172] (0xc0006c2dc0) (5) Data frame handling\nI0630 23:48:19.107200 480 log.go:172] (0xc0006c2dc0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.107.165.197:80/\nI0630 23:48:19.107289 480 log.go:172] (0xc00003a420) Data frame received for 3\nI0630 23:48:19.107312 480 log.go:172] (0xc0006ccaa0) (3) Data frame handling\nI0630 23:48:19.107338 480 log.go:172] (0xc0006ccaa0) (3) Data frame sent\nI0630 23:48:19.111294 480 log.go:172] (0xc00003a420) Data frame received for 3\nI0630 23:48:19.111311 480 log.go:172] (0xc0006ccaa0) (3) Data frame handling\nI0630 23:48:19.111329 480 log.go:172] (0xc0006ccaa0) (3) Data frame sent\nI0630 23:48:19.111760 480 log.go:172] (0xc00003a420) Data frame received for 3\nI0630 23:48:19.111794 480 log.go:172] (0xc0006ccaa0) (3) Data frame handling\nI0630 23:48:19.111805 480 log.go:172] (0xc0006ccaa0) (3) Data frame sent\nI0630 23:48:19.111818 480 log.go:172] (0xc00003a420) Data frame received for 5\nI0630 23:48:19.111824 480 log.go:172] (0xc0006c2dc0) (5) Data frame handling\nI0630 23:48:19.111831 480 log.go:172] (0xc0006c2dc0) (5) Data frame sent\nI0630 23:48:19.111838 480 log.go:172] (0xc00003a420) Data frame received for 5\nI0630 23:48:19.111844 480 log.go:172] (0xc0006c2dc0) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.107.165.197:80/\nI0630 23:48:19.111880 480 log.go:172] (0xc0006c2dc0) (5) Data frame sent\nI0630 23:48:19.116115 480 log.go:172] (0xc00003a420) Data frame received for 3\nI0630 23:48:19.116134 480 log.go:172] (0xc0006ccaa0) (3) Data frame handling\nI0630 23:48:19.116143 480 log.go:172] (0xc0006ccaa0) (3) Data frame sent\nI0630 23:48:19.116614 480 log.go:172] (0xc00003a420) Data frame received for 5\nI0630 23:48:19.116640 480 log.go:172] (0xc0006c2dc0) (5) 
Data frame handling\nI0630 23:48:19.116648 480 log.go:172] (0xc0006c2dc0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.107.165.197:80/\nI0630 23:48:19.116675 480 log.go:172] (0xc00003a420) Data frame received for 3\nI0630 23:48:19.116713 480 log.go:172] (0xc0006ccaa0) (3) Data frame handling\nI0630 23:48:19.116749 480 log.go:172] (0xc0006ccaa0) (3) Data frame sent\nI0630 23:48:19.120237 480 log.go:172] (0xc00003a420) Data frame received for 3\nI0630 23:48:19.120255 480 log.go:172] (0xc0006ccaa0) (3) Data frame handling\nI0630 23:48:19.120273 480 log.go:172] (0xc0006ccaa0) (3) Data frame sent\nI0630 23:48:19.120659 480 log.go:172] (0xc00003a420) Data frame received for 5\nI0630 23:48:19.120673 480 log.go:172] (0xc0006c2dc0) (5) Data frame handling\nI0630 23:48:19.120681 480 log.go:172] (0xc0006c2dc0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.107.165.197:80/\nI0630 23:48:19.120897 480 log.go:172] (0xc00003a420) Data frame received for 3\nI0630 23:48:19.120923 480 log.go:172] (0xc0006ccaa0) (3) Data frame handling\nI0630 23:48:19.120945 480 log.go:172] (0xc0006ccaa0) (3) Data frame sent\nI0630 23:48:19.125457 480 log.go:172] (0xc00003a420) Data frame received for 3\nI0630 23:48:19.125471 480 log.go:172] (0xc0006ccaa0) (3) Data frame handling\nI0630 23:48:19.125480 480 log.go:172] (0xc0006ccaa0) (3) Data frame sent\nI0630 23:48:19.126104 480 log.go:172] (0xc00003a420) Data frame received for 3\nI0630 23:48:19.126141 480 log.go:172] (0xc0006ccaa0) (3) Data frame handling\nI0630 23:48:19.126164 480 log.go:172] (0xc0006ccaa0) (3) Data frame sent\nI0630 23:48:19.126186 480 log.go:172] (0xc00003a420) Data frame received for 5\nI0630 23:48:19.126203 480 log.go:172] (0xc0006c2dc0) (5) Data frame handling\nI0630 23:48:19.126228 480 log.go:172] (0xc0006c2dc0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.107.165.197:80/\nI0630 23:48:19.130233 480 log.go:172] (0xc00003a420) Data frame received for 3\nI0630 23:48:19.130248 480 log.go:172] (0xc0006ccaa0) (3) Data frame handling\nI0630 23:48:19.130257 480 log.go:172] (0xc0006ccaa0) (3) Data frame sent\nI0630 23:48:19.130680 480 log.go:172] (0xc00003a420) Data frame received for 5\nI0630 23:48:19.130711 480 log.go:172] (0xc0006c2dc0) (5) Data frame handling\nI0630 23:48:19.130732 480 log.go:172] (0xc0006c2dc0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.107.165.197:80/\nI0630 23:48:19.130801 480 log.go:172] (0xc00003a420) Data frame received for 3\nI0630 23:48:19.130826 480 log.go:172] (0xc0006ccaa0) (3) Data frame handling\nI0630 23:48:19.130841 480 log.go:172] (0xc0006ccaa0) (3) Data frame sent\nI0630 23:48:19.135274 480 log.go:172] (0xc00003a420) Data frame received for 3\nI0630 23:48:19.135290 480 log.go:172] (0xc0006ccaa0) (3) Data frame handling\nI0630 23:48:19.135297 480 log.go:172] (0xc0006ccaa0) (3) Data frame sent\nI0630 23:48:19.135758 480 log.go:172] (0xc00003a420) Data frame received for 5\nI0630 23:48:19.135787 480 log.go:172] (0xc0006c2dc0) (5) Data frame handling\nI0630 23:48:19.135813 480 log.go:172] (0xc0006c2dc0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.107.165.197:80/\nI0630 23:48:19.135836 480 log.go:172] (0xc00003a420) Data frame received for 3\nI0630 23:48:19.135848 480 log.go:172] (0xc0006ccaa0) (3) Data frame handling\nI0630 23:48:19.135869 480 log.go:172] (0xc0006ccaa0) (3) Data frame sent\nI0630 23:48:19.140425 480 log.go:172] (0xc00003a420) Data frame received for 3\nI0630 23:48:19.140453 
480 log.go:172] (0xc0006ccaa0) (3) Data frame handling\nI0630 23:48:19.140476 480 log.go:172] (0xc0006ccaa0) (3) Data frame sent\nI0630 23:48:19.141983 480 log.go:172] (0xc00003a420) Data frame received for 5\nI0630 23:48:19.141998 480 log.go:172] (0xc0006c2dc0) (5) Data frame handling\nI0630 23:48:19.142111 480 log.go:172] (0xc00003a420) Data frame received for 3\nI0630 23:48:19.142142 480 log.go:172] (0xc0006ccaa0) (3) Data frame handling\nI0630 23:48:19.144052 480 log.go:172] (0xc00003a420) Data frame received for 1\nI0630 23:48:19.144093 480 log.go:172] (0xc00072a820) (1) Data frame handling\nI0630 23:48:19.144126 480 log.go:172] (0xc00072a820) (1) Data frame sent\nI0630 23:48:19.144178 480 log.go:172] (0xc00003a420) (0xc00072a820) Stream removed, broadcasting: 1\nI0630 23:48:19.144289 480 log.go:172] (0xc00003a420) Go away received\nI0630 23:48:19.144549 480 log.go:172] (0xc00003a420) (0xc00072a820) Stream removed, broadcasting: 1\nI0630 23:48:19.144568 480 log.go:172] (0xc00003a420) (0xc0006ccaa0) Stream removed, broadcasting: 3\nI0630 23:48:19.144583 480 log.go:172] (0xc00003a420) (0xc0006c2dc0) Stream removed, broadcasting: 5\n" Jun 30 23:48:19.154: INFO: stdout: "\naffinity-clusterip-transition-btsdz\naffinity-clusterip-transition-btsdz\naffinity-clusterip-transition-btsdz\naffinity-clusterip-transition-tbcmf\naffinity-clusterip-transition-tbcmf\naffinity-clusterip-transition-btsdz\naffinity-clusterip-transition-btsdz\naffinity-clusterip-transition-btsdz\naffinity-clusterip-transition-tbcmf\naffinity-clusterip-transition-btsdz\naffinity-clusterip-transition-btsdz\naffinity-clusterip-transition-btsdz\naffinity-clusterip-transition-tbcmf\naffinity-clusterip-transition-btsdz\naffinity-clusterip-transition-6n6mf\naffinity-clusterip-transition-6n6mf" Jun 30 23:48:19.154: INFO: Received response from host: Jun 30 23:48:19.154: INFO: Received response from host: affinity-clusterip-transition-btsdz Jun 30 23:48:19.154: INFO: Received response from host: affinity-clusterip-transition-btsdz Jun 30 23:48:19.154: INFO: Received response from host: affinity-clusterip-transition-btsdz Jun 30 23:48:19.154: INFO: Received response from host: affinity-clusterip-transition-tbcmf Jun 30 23:48:19.154: INFO: Received response from host: affinity-clusterip-transition-tbcmf Jun 30 23:48:19.154: INFO: Received response from host: affinity-clusterip-transition-btsdz Jun 30 23:48:19.154: INFO: Received response from host: affinity-clusterip-transition-btsdz Jun 30 23:48:19.154: INFO: Received response from host: affinity-clusterip-transition-btsdz Jun 30 23:48:19.154: INFO: Received response from host: affinity-clusterip-transition-tbcmf Jun 30 23:48:19.154: INFO: Received response from host: affinity-clusterip-transition-btsdz Jun 30 23:48:19.154: INFO: Received response from host: affinity-clusterip-transition-btsdz Jun 30 23:48:19.154: INFO: Received response from host: affinity-clusterip-transition-btsdz Jun 30 23:48:19.154: INFO: Received response from host: affinity-clusterip-transition-tbcmf Jun 30 23:48:19.154: INFO: Received response from host: affinity-clusterip-transition-btsdz Jun 30 23:48:19.154: INFO: Received response from host: affinity-clusterip-transition-6n6mf Jun 30 23:48:19.154: INFO: Received response from host: affinity-clusterip-transition-6n6mf Jun 30 23:48:19.162: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-8207 execpod-affinityd5b55 -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s 
--connect-timeout 2 http://10.107.165.197:80/ ; done' Jun 30 23:48:19.472: INFO: stderr: "I0630 23:48:19.312815 502 log.go:172] (0xc0009614a0) (0xc000c001e0) Create stream\nI0630 23:48:19.312883 502 log.go:172] (0xc0009614a0) (0xc000c001e0) Stream added, broadcasting: 1\nI0630 23:48:19.318985 502 log.go:172] (0xc0009614a0) Reply frame received for 1\nI0630 23:48:19.319053 502 log.go:172] (0xc0009614a0) (0xc0008261e0) Create stream\nI0630 23:48:19.319083 502 log.go:172] (0xc0009614a0) (0xc0008261e0) Stream added, broadcasting: 3\nI0630 23:48:19.320179 502 log.go:172] (0xc0009614a0) Reply frame received for 3\nI0630 23:48:19.320213 502 log.go:172] (0xc0009614a0) (0xc00081e460) Create stream\nI0630 23:48:19.320224 502 log.go:172] (0xc0009614a0) (0xc00081e460) Stream added, broadcasting: 5\nI0630 23:48:19.321805 502 log.go:172] (0xc0009614a0) Reply frame received for 5\nI0630 23:48:19.380087 502 log.go:172] (0xc0009614a0) Data frame received for 5\nI0630 23:48:19.380118 502 log.go:172] (0xc00081e460) (5) Data frame handling\nI0630 23:48:19.380127 502 log.go:172] (0xc00081e460) (5) Data frame sent\n+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.107.165.197:80/\nI0630 23:48:19.380162 502 log.go:172] (0xc0009614a0) Data frame received for 3\nI0630 23:48:19.380202 502 log.go:172] (0xc0008261e0) (3) Data frame handling\nI0630 23:48:19.380237 502 log.go:172] (0xc0008261e0) (3) Data frame sent\nI0630 23:48:19.385751 502 log.go:172] (0xc0009614a0) Data frame received for 3\nI0630 23:48:19.385771 502 log.go:172] (0xc0008261e0) (3) Data frame handling\nI0630 23:48:19.385783 502 log.go:172] (0xc0008261e0) (3) Data frame sent\nI0630 23:48:19.386205 502 log.go:172] (0xc0009614a0) Data frame received for 5\nI0630 23:48:19.386220 502 log.go:172] (0xc00081e460) (5) Data frame handling\nI0630 23:48:19.386228 502 log.go:172] (0xc00081e460) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.107.165.197:80/\nI0630 23:48:19.386242 502 log.go:172] (0xc0009614a0) Data frame received for 3\nI0630 23:48:19.386260 502 log.go:172] (0xc0008261e0) (3) Data frame handling\nI0630 23:48:19.386269 502 log.go:172] (0xc0008261e0) (3) Data frame sent\nI0630 23:48:19.390399 502 log.go:172] (0xc0009614a0) Data frame received for 3\nI0630 23:48:19.390415 502 log.go:172] (0xc0008261e0) (3) Data frame handling\nI0630 23:48:19.390433 502 log.go:172] (0xc0008261e0) (3) Data frame sent\nI0630 23:48:19.390991 502 log.go:172] (0xc0009614a0) Data frame received for 5\nI0630 23:48:19.391023 502 log.go:172] (0xc00081e460) (5) Data frame handling\nI0630 23:48:19.391037 502 log.go:172] (0xc00081e460) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.107.165.197:80/\nI0630 23:48:19.391055 502 log.go:172] (0xc0009614a0) Data frame received for 3\nI0630 23:48:19.391204 502 log.go:172] (0xc0008261e0) (3) Data frame handling\nI0630 23:48:19.391223 502 log.go:172] (0xc0008261e0) (3) Data frame sent\nI0630 23:48:19.395306 502 log.go:172] (0xc0009614a0) Data frame received for 3\nI0630 23:48:19.395321 502 log.go:172] (0xc0008261e0) (3) Data frame handling\nI0630 23:48:19.395328 502 log.go:172] (0xc0008261e0) (3) Data frame sent\nI0630 23:48:19.395719 502 log.go:172] (0xc0009614a0) Data frame received for 5\nI0630 23:48:19.395740 502 log.go:172] (0xc00081e460) (5) Data frame handling\nI0630 23:48:19.395757 502 log.go:172] (0xc00081e460) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.107.165.197:80/\nI0630 23:48:19.395864 502 log.go:172] (0xc0009614a0) Data frame 
received for 3\nI0630 23:48:19.395886 502 log.go:172] (0xc0008261e0) (3) Data frame handling\nI0630 23:48:19.395898 502 log.go:172] (0xc0008261e0) (3) Data frame sent\nI0630 23:48:19.399847 502 log.go:172] (0xc0009614a0) Data frame received for 3\nI0630 23:48:19.399870 502 log.go:172] (0xc0008261e0) (3) Data frame handling\nI0630 23:48:19.399895 502 log.go:172] (0xc0008261e0) (3) Data frame sent\nI0630 23:48:19.400252 502 log.go:172] (0xc0009614a0) Data frame received for 3\nI0630 23:48:19.400276 502 log.go:172] (0xc0008261e0) (3) Data frame handling\nI0630 23:48:19.400285 502 log.go:172] (0xc0008261e0) (3) Data frame sent\nI0630 23:48:19.400299 502 log.go:172] (0xc0009614a0) Data frame received for 5\nI0630 23:48:19.400307 502 log.go:172] (0xc00081e460) (5) Data frame handling\nI0630 23:48:19.400315 502 log.go:172] (0xc00081e460) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.107.165.197:80/\nI0630 23:48:19.404672 502 log.go:172] (0xc0009614a0) Data frame received for 3\nI0630 23:48:19.404684 502 log.go:172] (0xc0008261e0) (3) Data frame handling\nI0630 23:48:19.404691 502 log.go:172] (0xc0008261e0) (3) Data frame sent\nI0630 23:48:19.405231 502 log.go:172] (0xc0009614a0) Data frame received for 5\nI0630 23:48:19.405257 502 log.go:172] (0xc00081e460) (5) Data frame handling\nI0630 23:48:19.405269 502 log.go:172] (0xc00081e460) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.107.165.197:80/\nI0630 23:48:19.405309 502 log.go:172] (0xc0009614a0) Data frame received for 3\nI0630 23:48:19.405337 502 log.go:172] (0xc0008261e0) (3) Data frame handling\nI0630 23:48:19.405346 502 log.go:172] (0xc0008261e0) (3) Data frame sent\nI0630 23:48:19.408865 502 log.go:172] (0xc0009614a0) Data frame received for 3\nI0630 23:48:19.408880 502 log.go:172] (0xc0008261e0) (3) Data frame handling\nI0630 23:48:19.408896 502 log.go:172] (0xc0008261e0) (3) Data frame sent\nI0630 23:48:19.409343 502 log.go:172] (0xc0009614a0) Data frame received for 5\nI0630 23:48:19.409354 502 log.go:172] (0xc00081e460) (5) Data frame handling\nI0630 23:48:19.409362 502 log.go:172] (0xc00081e460) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.107.165.197:80/\nI0630 23:48:19.409384 502 log.go:172] (0xc0009614a0) Data frame received for 3\nI0630 23:48:19.409395 502 log.go:172] (0xc0008261e0) (3) Data frame handling\nI0630 23:48:19.409407 502 log.go:172] (0xc0008261e0) (3) Data frame sent\nI0630 23:48:19.413378 502 log.go:172] (0xc0009614a0) Data frame received for 3\nI0630 23:48:19.413415 502 log.go:172] (0xc0008261e0) (3) Data frame handling\nI0630 23:48:19.413641 502 log.go:172] (0xc0008261e0) (3) Data frame sent\nI0630 23:48:19.413714 502 log.go:172] (0xc0009614a0) Data frame received for 5\nI0630 23:48:19.413743 502 log.go:172] (0xc00081e460) (5) Data frame handling\nI0630 23:48:19.413772 502 log.go:172] (0xc00081e460) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.107.165.197:80/\nI0630 23:48:19.413851 502 log.go:172] (0xc0009614a0) Data frame received for 3\nI0630 23:48:19.413871 502 log.go:172] (0xc0008261e0) (3) Data frame handling\nI0630 23:48:19.413891 502 log.go:172] (0xc0008261e0) (3) Data frame sent\nI0630 23:48:19.416922 502 log.go:172] (0xc0009614a0) Data frame received for 3\nI0630 23:48:19.416939 502 log.go:172] (0xc0008261e0) (3) Data frame handling\nI0630 23:48:19.416956 502 log.go:172] (0xc0008261e0) (3) Data frame sent\nI0630 23:48:19.417658 502 log.go:172] (0xc0009614a0) Data frame received for 3\nI0630 
23:48:19.417686 502 log.go:172] (0xc0008261e0) (3) Data frame handling\nI0630 23:48:19.417698 502 log.go:172] (0xc0008261e0) (3) Data frame sent\nI0630 23:48:19.417715 502 log.go:172] (0xc0009614a0) Data frame received for 5\nI0630 23:48:19.417725 502 log.go:172] (0xc00081e460) (5) Data frame handling\nI0630 23:48:19.417740 502 log.go:172] (0xc00081e460) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.107.165.197:80/\nI0630 23:48:19.421543 502 log.go:172] (0xc0009614a0) Data frame received for 3\nI0630 23:48:19.421572 502 log.go:172] (0xc0008261e0) (3) Data frame handling\nI0630 23:48:19.421602 502 log.go:172] (0xc0008261e0) (3) Data frame sent\nI0630 23:48:19.421829 502 log.go:172] (0xc0009614a0) Data frame received for 5\nI0630 23:48:19.421842 502 log.go:172] (0xc00081e460) (5) Data frame handling\nI0630 23:48:19.421852 502 log.go:172] (0xc00081e460) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.107.165.197:80/\nI0630 23:48:19.421900 502 log.go:172] (0xc0009614a0) Data frame received for 3\nI0630 23:48:19.421909 502 log.go:172] (0xc0008261e0) (3) Data frame handling\nI0630 23:48:19.421915 502 log.go:172] (0xc0008261e0) (3) Data frame sent\nI0630 23:48:19.429357 502 log.go:172] (0xc0009614a0) Data frame received for 3\nI0630 23:48:19.429385 502 log.go:172] (0xc0008261e0) (3) Data frame handling\nI0630 23:48:19.429404 502 log.go:172] (0xc0008261e0) (3) Data frame sent\nI0630 23:48:19.430293 502 log.go:172] (0xc0009614a0) Data frame received for 3\nI0630 23:48:19.430330 502 log.go:172] (0xc0008261e0) (3) Data frame handling\nI0630 23:48:19.430350 502 log.go:172] (0xc0008261e0) (3) Data frame sent\nI0630 23:48:19.430378 502 log.go:172] (0xc0009614a0) Data frame received for 5\nI0630 23:48:19.430394 502 log.go:172] (0xc00081e460) (5) Data frame handling\nI0630 23:48:19.430418 502 log.go:172] (0xc00081e460) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.107.165.197:80/\nI0630 23:48:19.434177 502 log.go:172] (0xc0009614a0) Data frame received for 3\nI0630 23:48:19.434193 502 log.go:172] (0xc0008261e0) (3) Data frame handling\nI0630 23:48:19.434202 502 log.go:172] (0xc0008261e0) (3) Data frame sent\nI0630 23:48:19.434555 502 log.go:172] (0xc0009614a0) Data frame received for 3\nI0630 23:48:19.434564 502 log.go:172] (0xc0008261e0) (3) Data frame handling\nI0630 23:48:19.434571 502 log.go:172] (0xc0008261e0) (3) Data frame sent\nI0630 23:48:19.434638 502 log.go:172] (0xc0009614a0) Data frame received for 5\nI0630 23:48:19.434652 502 log.go:172] (0xc00081e460) (5) Data frame handling\nI0630 23:48:19.434669 502 log.go:172] (0xc00081e460) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.107.165.197:80/\nI0630 23:48:19.438156 502 log.go:172] (0xc0009614a0) Data frame received for 5\nI0630 23:48:19.438201 502 log.go:172] (0xc00081e460) (5) Data frame handling\nI0630 23:48:19.438234 502 log.go:172] (0xc00081e460) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.107.165.197:80/\nI0630 23:48:19.438277 502 log.go:172] (0xc0009614a0) Data frame received for 3\nI0630 23:48:19.438301 502 log.go:172] (0xc0008261e0) (3) Data frame handling\nI0630 23:48:19.438324 502 log.go:172] (0xc0008261e0) (3) Data frame sent\nI0630 23:48:19.443136 502 log.go:172] (0xc0009614a0) Data frame received for 3\nI0630 23:48:19.443164 502 log.go:172] (0xc0008261e0) (3) Data frame handling\nI0630 23:48:19.443188 502 log.go:172] (0xc0008261e0) (3) Data frame sent\nI0630 23:48:19.443581 502 log.go:172] (0xc0009614a0) 
Data frame received for 5\nI0630 23:48:19.443594 502 log.go:172] (0xc00081e460) (5) Data frame handling\nI0630 23:48:19.443602 502 log.go:172] (0xc00081e460) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.107.165.197:80/\nI0630 23:48:19.443818 502 log.go:172] (0xc0009614a0) Data frame received for 3\nI0630 23:48:19.443835 502 log.go:172] (0xc0008261e0) (3) Data frame handling\nI0630 23:48:19.443863 502 log.go:172] (0xc0008261e0) (3) Data frame sent\nI0630 23:48:19.450532 502 log.go:172] (0xc0009614a0) Data frame received for 3\nI0630 23:48:19.450556 502 log.go:172] (0xc0008261e0) (3) Data frame handling\nI0630 23:48:19.450571 502 log.go:172] (0xc0008261e0) (3) Data frame sent\nI0630 23:48:19.451544 502 log.go:172] (0xc0009614a0) Data frame received for 3\nI0630 23:48:19.451562 502 log.go:172] (0xc0008261e0) (3) Data frame handling\nI0630 23:48:19.451576 502 log.go:172] (0xc0008261e0) (3) Data frame sent\nI0630 23:48:19.451594 502 log.go:172] (0xc0009614a0) Data frame received for 5\nI0630 23:48:19.451605 502 log.go:172] (0xc00081e460) (5) Data frame handling\nI0630 23:48:19.451622 502 log.go:172] (0xc00081e460) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.107.165.197:80/\nI0630 23:48:19.455599 502 log.go:172] (0xc0009614a0) Data frame received for 3\nI0630 23:48:19.455624 502 log.go:172] (0xc0008261e0) (3) Data frame handling\nI0630 23:48:19.455642 502 log.go:172] (0xc0008261e0) (3) Data frame sent\nI0630 23:48:19.456083 502 log.go:172] (0xc0009614a0) Data frame received for 3\nI0630 23:48:19.456097 502 log.go:172] (0xc0008261e0) (3) Data frame handling\nI0630 23:48:19.456130 502 log.go:172] (0xc0009614a0) Data frame received for 5\nI0630 23:48:19.456157 502 log.go:172] (0xc00081e460) (5) Data frame handling\nI0630 23:48:19.456184 502 log.go:172] (0xc00081e460) (5) Data frame sent\nI0630 23:48:19.456203 502 log.go:172] (0xc0009614a0) Data frame received for 5\nI0630 23:48:19.456220 502 log.go:172] (0xc00081e460) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.107.165.197:80/\nI0630 23:48:19.456243 502 log.go:172] (0xc0008261e0) (3) Data frame sent\nI0630 23:48:19.456265 502 log.go:172] (0xc00081e460) (5) Data frame sent\nI0630 23:48:19.463363 502 log.go:172] (0xc0009614a0) Data frame received for 3\nI0630 23:48:19.463379 502 log.go:172] (0xc0008261e0) (3) Data frame handling\nI0630 23:48:19.463387 502 log.go:172] (0xc0008261e0) (3) Data frame sent\nI0630 23:48:19.464084 502 log.go:172] (0xc0009614a0) Data frame received for 3\nI0630 23:48:19.464131 502 log.go:172] (0xc0008261e0) (3) Data frame handling\nI0630 23:48:19.464162 502 log.go:172] (0xc0009614a0) Data frame received for 5\nI0630 23:48:19.464180 502 log.go:172] (0xc00081e460) (5) Data frame handling\nI0630 23:48:19.466011 502 log.go:172] (0xc0009614a0) Data frame received for 1\nI0630 23:48:19.466036 502 log.go:172] (0xc000c001e0) (1) Data frame handling\nI0630 23:48:19.466056 502 log.go:172] (0xc000c001e0) (1) Data frame sent\nI0630 23:48:19.466071 502 log.go:172] (0xc0009614a0) (0xc000c001e0) Stream removed, broadcasting: 1\nI0630 23:48:19.466097 502 log.go:172] (0xc0009614a0) Go away received\nI0630 23:48:19.466424 502 log.go:172] (0xc0009614a0) (0xc000c001e0) Stream removed, broadcasting: 1\nI0630 23:48:19.466446 502 log.go:172] (0xc0009614a0) (0xc0008261e0) Stream removed, broadcasting: 3\nI0630 23:48:19.466457 502 log.go:172] (0xc0009614a0) (0xc00081e460) Stream removed, broadcasting: 5\n" Jun 30 23:48:19.473: INFO: stdout: 
"\naffinity-clusterip-transition-6n6mf\naffinity-clusterip-transition-6n6mf\naffinity-clusterip-transition-6n6mf\naffinity-clusterip-transition-6n6mf\naffinity-clusterip-transition-6n6mf\naffinity-clusterip-transition-6n6mf\naffinity-clusterip-transition-6n6mf\naffinity-clusterip-transition-6n6mf\naffinity-clusterip-transition-6n6mf\naffinity-clusterip-transition-6n6mf\naffinity-clusterip-transition-6n6mf\naffinity-clusterip-transition-6n6mf\naffinity-clusterip-transition-6n6mf\naffinity-clusterip-transition-6n6mf\naffinity-clusterip-transition-6n6mf\naffinity-clusterip-transition-6n6mf" Jun 30 23:48:19.473: INFO: Received response from host: Jun 30 23:48:19.473: INFO: Received response from host: affinity-clusterip-transition-6n6mf Jun 30 23:48:19.473: INFO: Received response from host: affinity-clusterip-transition-6n6mf Jun 30 23:48:19.473: INFO: Received response from host: affinity-clusterip-transition-6n6mf Jun 30 23:48:19.473: INFO: Received response from host: affinity-clusterip-transition-6n6mf Jun 30 23:48:19.473: INFO: Received response from host: affinity-clusterip-transition-6n6mf Jun 30 23:48:19.473: INFO: Received response from host: affinity-clusterip-transition-6n6mf Jun 30 23:48:19.473: INFO: Received response from host: affinity-clusterip-transition-6n6mf Jun 30 23:48:19.473: INFO: Received response from host: affinity-clusterip-transition-6n6mf Jun 30 23:48:19.473: INFO: Received response from host: affinity-clusterip-transition-6n6mf Jun 30 23:48:19.473: INFO: Received response from host: affinity-clusterip-transition-6n6mf Jun 30 23:48:19.473: INFO: Received response from host: affinity-clusterip-transition-6n6mf Jun 30 23:48:19.473: INFO: Received response from host: affinity-clusterip-transition-6n6mf Jun 30 23:48:19.473: INFO: Received response from host: affinity-clusterip-transition-6n6mf Jun 30 23:48:19.473: INFO: Received response from host: affinity-clusterip-transition-6n6mf Jun 30 23:48:19.473: INFO: Received response from host: affinity-clusterip-transition-6n6mf Jun 30 23:48:19.473: INFO: Received response from host: affinity-clusterip-transition-6n6mf Jun 30 23:48:19.473: INFO: Cleaning up the exec pod STEP: deleting ReplicationController affinity-clusterip-transition in namespace services-8207, will wait for the garbage collector to delete the pods Jun 30 23:48:19.657: INFO: Deleting ReplicationController affinity-clusterip-transition took: 6.295347ms Jun 30 23:48:20.257: INFO: Terminating ReplicationController affinity-clusterip-transition pods took: 600.32787ms [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 30 23:48:35.289: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-8207" for this suite. 
[AfterEach] [sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:813
• [SLOW TEST:28.425 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]","total":294,"completed":41,"skipped":514,"failed":0}
SSSSSSSS
------------------------------
[sig-apps] Deployment
deployment should support proportional scaling [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jun 30 23:48:35.306: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:77
[It] deployment should support proportional scaling [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
Jun 30 23:48:35.379: INFO: Creating deployment "webserver-deployment"
Jun 30 23:48:35.391: INFO: Waiting for observed generation 1
Jun 30 23:48:37.584: INFO: Waiting for all required pods to come up
Jun 30 23:48:37.588: INFO: Pod name httpd: Found 10 pods out of 10
STEP: ensuring each pod is running
Jun 30 23:48:49.595: INFO: Waiting for deployment "webserver-deployment" to complete
Jun 30 23:48:49.601: INFO: Updating deployment "webserver-deployment" with a non-existent image
Jun 30 23:48:49.610: INFO: Updating deployment webserver-deployment
Jun 30 23:48:49.610: INFO: Waiting for observed generation 2
Jun 30 23:48:51.652: INFO: Waiting for the first rollout's replicaset to have .status.availableReplicas = 8
Jun 30 23:48:51.655: INFO: Waiting for the first rollout's replicaset to have .spec.replicas = 8
Jun 30 23:48:51.658: INFO: Waiting for the first rollout's replicaset of deployment "webserver-deployment" to have desired number of replicas
Jun 30 23:48:51.667: INFO: Verifying that the second rollout's replicaset has .status.availableReplicas = 0
Jun 30 23:48:51.667: INFO: Waiting for the second rollout's replicaset to have .spec.replicas = 5
Jun 30 23:48:51.669: INFO: Waiting for the second rollout's replicaset of deployment "webserver-deployment" to have desired number of replicas
Jun 30 23:48:51.674: INFO: Verifying that deployment "webserver-deployment" has minimum required number of available replicas
Jun 30 23:48:51.674: INFO: Scaling up the deployment "webserver-deployment" from 10 to 30
Jun 30 23:48:51.682: INFO: Updating deployment webserver-deployment
Jun 30 23:48:51.682: INFO: Waiting for the replicasets of deployment "webserver-deployment" to have desired number of replicas
Jun 30 23:48:52.334: INFO: Verifying that first rollout's replicaset has .spec.replicas = 20
Jun 30 23:48:52.859: INFO: Verifying that second rollout's replicaset has .spec.replicas = 13
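The 20/13 split being verified follows from proportional scaling. Mid-rollout the deployment allows 10 desired + maxSurge 3 = 13 pods, split 8/5 between the old and new ReplicaSets; scaling to 30 raises the allowance to 30 + 3 = 33 (the deployment.kubernetes.io/max-replicas:33 annotation in the dumps below), and the 20 extra replicas are divided in proportion to the 8/5 split. A rough Go sketch of that arithmetic, assuming a simple floor-then-leftover allocation (the real deployment controller's allocation logic is more involved, but it produces the same totals here):

    // proportion.go - a rough sketch of the proportional-scaling arithmetic,
    // not the deployment controller's actual code.
    package main

    import "fmt"

    // scaleProportionally distributes delta across sizes in proportion to
    // each entry, handing any rounding leftover to the later entries one by
    // one (a simplification of the controller's tie-breaking).
    func scaleProportionally(sizes []int, delta int) []int {
    	total := 0
    	for _, s := range sizes {
    		total += s
    	}
    	out := make([]int, len(sizes))
    	used := 0
    	for i, s := range sizes {
    		out[i] = s + s*delta/total // floor of the proportional share
    		used += s * delta / total
    	}
    	for i := len(sizes) - 1; used < delta; i-- { // leftovers, newest first
    		out[i]++
    		used++
    	}
    	return out
    }

    func main() {
    	// From the log: old RS at 8, new RS at 5; the allowed total grows
    	// from 10+3=13 to 30+3=33, so 20 replicas are added.
    	fmt.Println(scaleProportionally([]int{8, 5}, 20)) // [20 13]
    }

Here 8*20/13 floors to 12 and 5*20/13 floors to 7, and the single leftover replica lands on the newer ReplicaSet, matching the .spec.replicas = 20 and = 13 checks above.
[AfterEach] [sig-apps] Deployment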
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:71 Jun 30 23:48:55.068: INFO: Deployment "webserver-deployment": &Deployment{ObjectMeta:{webserver-deployment deployment-174 /apis/apps/v1/namespaces/deployment-174/deployments/webserver-deployment 181694d7-305d-4969-bd90-ebb37b291306 17235357 3 2020-06-30 23:48:35 +0000 UTC map[name:httpd] map[deployment.kubernetes.io/revision:2] [] [] [{e2e.test Update apps/v1 2020-06-30 23:48:51 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{}}},"f:strategy":{"f:rollingUpdate":{".":{},"f:maxSurge":{},"f:maxUnavailable":{}},"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update apps/v1 2020-06-30 23:48:53 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}}},"f:status":{"f:availableReplicas":{},"f:conditions":{".":{},"k:{\"type\":\"Available\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Progressing\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{},"f:unavailableReplicas":{},"f:updatedReplicas":{}}}}]},Spec:DeploymentSpec{Replicas:*30,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd] map[] [] [] []} {[] [] [{httpd webserver:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0041def78 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:2,MaxSurge:3,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:3,Replicas:33,UpdatedReplicas:13,AvailableReplicas:8,UnavailableReplicas:25,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:False,Reason:MinimumReplicasUnavailable,Message:Deployment does not have minimum availability.,LastUpdateTime:2020-06-30 23:48:52 +0000 UTC,LastTransitionTime:2020-06-30 23:48:52 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:ReplicaSetUpdated,Message:ReplicaSet "webserver-deployment-6676bcd6d4" is progressing.,LastUpdateTime:2020-06-30 23:48:53 +0000 UTC,LastTransitionTime:2020-06-30 23:48:35 +0000 UTC,},},ReadyReplicas:8,CollisionCount:nil,},} Jun 30 23:48:55.394: INFO: New 
ReplicaSet "webserver-deployment-6676bcd6d4" of Deployment "webserver-deployment": &ReplicaSet{ObjectMeta:{webserver-deployment-6676bcd6d4 deployment-174 /apis/apps/v1/namespaces/deployment-174/replicasets/webserver-deployment-6676bcd6d4 5eee6968-0bd5-4d6f-b445-99340fbf5328 17235354 3 2020-06-30 23:48:49 +0000 UTC map[name:httpd pod-template-hash:6676bcd6d4] map[deployment.kubernetes.io/desired-replicas:30 deployment.kubernetes.io/max-replicas:33 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment webserver-deployment 181694d7-305d-4969-bd90-ebb37b291306 0xc0040c34b7 0xc0040c34b8}] [] [{kube-controller-manager Update apps/v1 2020-06-30 23:48:53 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"181694d7-305d-4969-bd90-ebb37b291306\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*13,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: 6676bcd6d4,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd pod-template-hash:6676bcd6d4] map[] [] [] []} {[] [] [{httpd webserver:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0040c3538 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:13,FullyLabeledReplicas:13,ObservedGeneration:3,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Jun 30 23:48:55.394: INFO: All old ReplicaSets of Deployment "webserver-deployment": Jun 30 23:48:55.394: INFO: &ReplicaSet{ObjectMeta:{webserver-deployment-84855cf797 deployment-174 /apis/apps/v1/namespaces/deployment-174/replicasets/webserver-deployment-84855cf797 1ace2eb9-6dbc-4a31-a04d-fb803e5af96a 17235345 3 2020-06-30 23:48:35 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[deployment.kubernetes.io/desired-replicas:30 deployment.kubernetes.io/max-replicas:33 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment webserver-deployment 181694d7-305d-4969-bd90-ebb37b291306 0xc0040c3597 0xc0040c3598}] [] [{kube-controller-manager Update apps/v1 2020-06-30 23:48:52 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"181694d7-305d-4969-bd90-ebb37b291306\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*20,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: 84855cf797,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0040c3608 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:20,FullyLabeledReplicas:20,ObservedGeneration:3,ReadyReplicas:8,AvailableReplicas:8,Conditions:[]ReplicaSetCondition{},},} Jun 30 23:48:55.970: INFO: Pod "webserver-deployment-6676bcd6d4-2pd58" is not available: &Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-2pd58 webserver-deployment-6676bcd6d4- deployment-174 /api/v1/namespaces/deployment-174/pods/webserver-deployment-6676bcd6d4-2pd58 b3ef5e28-37f5-4014-9826-82cc48a7923b 17235394 0 2020-06-30 23:48:52 +0000 UTC map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 5eee6968-0bd5-4d6f-b445-99340fbf5328 0xc0040c3b67 0xc0040c3b68}] [] [{kube-controller-manager Update v1 2020-06-30 23:48:52 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5eee6968-0bd5-4d6f-b445-99340fbf5328\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-06-30 23:48:54 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-pm2pp,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-pm2pp,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-pm2pp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-30 23:48:53 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-30 23:48:53 +0000 UTC,Reason:ContainersNotReady,Message:containers 
with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-30 23:48:53 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-30 23:48:52 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.13,PodIP:,StartTime:2020-06-30 23:48:53 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jun 30 23:48:55.970: INFO: Pod "webserver-deployment-6676bcd6d4-6kpjn" is not available: &Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-6kpjn webserver-deployment-6676bcd6d4- deployment-174 /api/v1/namespaces/deployment-174/pods/webserver-deployment-6676bcd6d4-6kpjn cd145d30-3c41-471b-bb3e-8e190a747876 17235365 0 2020-06-30 23:48:52 +0000 UTC map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 5eee6968-0bd5-4d6f-b445-99340fbf5328 0xc0040c3d17 0xc0040c3d18}] [] [{kube-controller-manager Update v1 2020-06-30 23:48:52 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5eee6968-0bd5-4d6f-b445-99340fbf5328\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-06-30 23:48:53 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-pm2pp,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-pm2pp,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-pm2pp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-30 23:48:53 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-30 23:48:53 +0000 UTC,Reason:ContainersNotReady,Message:containers 
with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-30 23:48:53 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-30 23:48:52 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:,StartTime:2020-06-30 23:48:53 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jun 30 23:48:55.970: INFO: Pod "webserver-deployment-6676bcd6d4-6kx65" is not available: &Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-6kx65 webserver-deployment-6676bcd6d4- deployment-174 /api/v1/namespaces/deployment-174/pods/webserver-deployment-6676bcd6d4-6kx65 7a9a5eaa-8f80-422d-8dd0-5f2841480e1c 17235381 0 2020-06-30 23:48:52 +0000 UTC map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 5eee6968-0bd5-4d6f-b445-99340fbf5328 0xc0040c3ec7 0xc0040c3ec8}] [] [{kube-controller-manager Update v1 2020-06-30 23:48:52 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5eee6968-0bd5-4d6f-b445-99340fbf5328\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-06-30 23:48:53 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-pm2pp,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-pm2pp,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-pm2pp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-30 23:48:53 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-30 23:48:53 +0000 UTC,Reason:ContainersNotReady,Message:containers 
with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-30 23:48:53 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-30 23:48:52 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.13,PodIP:,StartTime:2020-06-30 23:48:53 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jun 30 23:48:55.970: INFO: Pod "webserver-deployment-6676bcd6d4-8lcrn" is not available: &Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-8lcrn webserver-deployment-6676bcd6d4- deployment-174 /api/v1/namespaces/deployment-174/pods/webserver-deployment-6676bcd6d4-8lcrn aed00224-913e-4ee6-a945-ab10aeb6da59 17235373 0 2020-06-30 23:48:52 +0000 UTC map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 5eee6968-0bd5-4d6f-b445-99340fbf5328 0xc003fe8077 0xc003fe8078}] [] [{kube-controller-manager Update v1 2020-06-30 23:48:52 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5eee6968-0bd5-4d6f-b445-99340fbf5328\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-06-30 23:48:53 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-pm2pp,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-pm2pp,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-pm2pp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-30 23:48:53 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-30 23:48:53 +0000 UTC,Reason:ContainersNotReady,Message:containers 
with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-30 23:48:53 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-30 23:48:52 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:,StartTime:2020-06-30 23:48:53 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jun 30 23:48:55.971: INFO: Pod "webserver-deployment-6676bcd6d4-8sln6" is not available: &Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-8sln6 webserver-deployment-6676bcd6d4- deployment-174 /api/v1/namespaces/deployment-174/pods/webserver-deployment-6676bcd6d4-8sln6 a2ca3a26-822a-44b1-b191-b99a2e434148 17235264 0 2020-06-30 23:48:49 +0000 UTC map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 5eee6968-0bd5-4d6f-b445-99340fbf5328 0xc003fe8227 0xc003fe8228}] [] [{kube-controller-manager Update v1 2020-06-30 23:48:49 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5eee6968-0bd5-4d6f-b445-99340fbf5328\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-06-30 23:48:50 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-pm2pp,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-pm2pp,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-pm2pp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-30 23:48:49 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-30 23:48:49 +0000 UTC,Reason:ContainersNotReady,Message:containers 
with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-30 23:48:49 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-30 23:48:49 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.13,PodIP:,StartTime:2020-06-30 23:48:49 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jun 30 23:48:55.971: INFO: Pod "webserver-deployment-6676bcd6d4-9fgdw" is not available: &Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-9fgdw webserver-deployment-6676bcd6d4- deployment-174 /api/v1/namespaces/deployment-174/pods/webserver-deployment-6676bcd6d4-9fgdw 7045cbbd-8ef9-40a6-b100-02002531805a 17235238 0 2020-06-30 23:48:49 +0000 UTC map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 5eee6968-0bd5-4d6f-b445-99340fbf5328 0xc003fe83d7 0xc003fe83d8}] [] [{kube-controller-manager Update v1 2020-06-30 23:48:49 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5eee6968-0bd5-4d6f-b445-99340fbf5328\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-06-30 23:48:49 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-pm2pp,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-pm2pp,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-pm2pp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-30 23:48:49 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-30 23:48:49 +0000 UTC,Reason:ContainersNotReady,Message:containers 
with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-30 23:48:49 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-30 23:48:49 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:,StartTime:2020-06-30 23:48:49 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jun 30 23:48:55.971: INFO: Pod "webserver-deployment-6676bcd6d4-9rgq6" is not available: &Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-9rgq6 webserver-deployment-6676bcd6d4- deployment-174 /api/v1/namespaces/deployment-174/pods/webserver-deployment-6676bcd6d4-9rgq6 e199ad88-8911-47a3-b882-73c17a007f8a 17235368 0 2020-06-30 23:48:52 +0000 UTC map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 5eee6968-0bd5-4d6f-b445-99340fbf5328 0xc003fe8587 0xc003fe8588}] [] [{kube-controller-manager Update v1 2020-06-30 23:48:52 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5eee6968-0bd5-4d6f-b445-99340fbf5328\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-06-30 23:48:53 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-pm2pp,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-pm2pp,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-pm2pp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-30 23:48:53 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-30 23:48:53 +0000 UTC,Reason:ContainersNotReady,Message:containers 
with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-30 23:48:53 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-30 23:48:52 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:,StartTime:2020-06-30 23:48:53 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jun 30 23:48:55.971: INFO: Pod "webserver-deployment-6676bcd6d4-cggv2" is not available: &Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-cggv2 webserver-deployment-6676bcd6d4- deployment-174 /api/v1/namespaces/deployment-174/pods/webserver-deployment-6676bcd6d4-cggv2 29427c52-7380-447a-9b8a-04a3b97650e3 17235266 0 2020-06-30 23:48:49 +0000 UTC map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 5eee6968-0bd5-4d6f-b445-99340fbf5328 0xc003fe8737 0xc003fe8738}] [] [{kube-controller-manager Update v1 2020-06-30 23:48:49 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5eee6968-0bd5-4d6f-b445-99340fbf5328\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-06-30 23:48:50 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-pm2pp,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-pm2pp,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-pm2pp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-30 23:48:50 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-30 23:48:50 +0000 UTC,Reason:ContainersNotReady,Message:containers 
with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-30 23:48:50 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-30 23:48:49 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:,StartTime:2020-06-30 23:48:50 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jun 30 23:48:55.971: INFO: Pod "webserver-deployment-6676bcd6d4-fftr7" is not available: &Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-fftr7 webserver-deployment-6676bcd6d4- deployment-174 /api/v1/namespaces/deployment-174/pods/webserver-deployment-6676bcd6d4-fftr7 3812d8a6-1c30-434e-84ea-826a9167cad2 17235377 0 2020-06-30 23:48:52 +0000 UTC map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 5eee6968-0bd5-4d6f-b445-99340fbf5328 0xc003fe88e7 0xc003fe88e8}] [] [{kube-controller-manager Update v1 2020-06-30 23:48:52 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5eee6968-0bd5-4d6f-b445-99340fbf5328\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-06-30 23:48:53 +0000 UTC FieldsV1
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-pm2pp,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-pm2pp,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-pm2pp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-30 23:48:53 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-30 23:48:53 +0000 UTC,Reason:ContainersNotReady,Message:containers 
with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-30 23:48:53 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-30 23:48:52 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:,StartTime:2020-06-30 23:48:53 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jun 30 23:48:55.972: INFO: Pod "webserver-deployment-6676bcd6d4-g5hlr" is not available: &Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-g5hlr webserver-deployment-6676bcd6d4- deployment-174 /api/v1/namespaces/deployment-174/pods/webserver-deployment-6676bcd6d4-g5hlr 58e9e9e6-9aaa-4192-af87-d2cc96cf3cd3 17235248 0 2020-06-30 23:48:49 +0000 UTC map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 5eee6968-0bd5-4d6f-b445-99340fbf5328 0xc003fe8a97 0xc003fe8a98}] [] [{kube-controller-manager Update v1 2020-06-30 23:48:49 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5eee6968-0bd5-4d6f-b445-99340fbf5328\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-06-30 23:48:49 +0000 UTC FieldsV1
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-pm2pp,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-pm2pp,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-pm2pp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-30 23:48:49 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-30 23:48:49 +0000 UTC,Reason:ContainersNotReady,Message:containers 
with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-30 23:48:49 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-30 23:48:49 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.13,PodIP:,StartTime:2020-06-30 23:48:49 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jun 30 23:48:55.972: INFO: Pod "webserver-deployment-6676bcd6d4-hvbcg" is not available: &Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-hvbcg webserver-deployment-6676bcd6d4- deployment-174 /api/v1/namespaces/deployment-174/pods/webserver-deployment-6676bcd6d4-hvbcg c4cede37-c035-483b-84cc-46cb37e64b18 17235378 0 2020-06-30 23:48:52 +0000 UTC map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 5eee6968-0bd5-4d6f-b445-99340fbf5328 0xc003fe8c47 0xc003fe8c48}] [] [{kube-controller-manager Update v1 2020-06-30 23:48:52 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5eee6968-0bd5-4d6f-b445-99340fbf5328\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-06-30 23:48:53 +0000 UTC FieldsV1
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-pm2pp,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-pm2pp,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-pm2pp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-30 23:48:53 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-30 23:48:53 +0000 UTC,Reason:ContainersNotReady,Message:containers 
with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-30 23:48:53 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-30 23:48:52 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.13,PodIP:,StartTime:2020-06-30 23:48:53 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jun 30 23:48:55.972: INFO: Pod "webserver-deployment-6676bcd6d4-qdn48" is not available: &Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-qdn48 webserver-deployment-6676bcd6d4- deployment-174 /api/v1/namespaces/deployment-174/pods/webserver-deployment-6676bcd6d4-qdn48 954e7d7b-aec6-4c67-b1e3-df2731346f0f 17235269 0 2020-06-30 23:48:49 +0000 UTC map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 5eee6968-0bd5-4d6f-b445-99340fbf5328 0xc003fe8df7 0xc003fe8df8}] [] [{kube-controller-manager Update v1 2020-06-30 23:48:49 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5eee6968-0bd5-4d6f-b445-99340fbf5328\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-06-30 23:48:50 +0000 UTC FieldsV1
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-pm2pp,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-pm2pp,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-pm2pp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-30 23:48:50 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-30 23:48:50 +0000 UTC,Reason:ContainersNotReady,Message:containers 
with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-30 23:48:50 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-30 23:48:50 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:,StartTime:2020-06-30 23:48:50 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jun 30 23:48:55.972: INFO: Pod "webserver-deployment-6676bcd6d4-v6qtl" is not available: &Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-v6qtl webserver-deployment-6676bcd6d4- deployment-174 /api/v1/namespaces/deployment-174/pods/webserver-deployment-6676bcd6d4-v6qtl efd9826b-f4c1-493a-bbe1-5fb28dd76839 17235360 0 2020-06-30 23:48:52 +0000 UTC map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 5eee6968-0bd5-4d6f-b445-99340fbf5328 0xc003fe8fa7 0xc003fe8fa8}] [] [{kube-controller-manager Update v1 2020-06-30 23:48:52 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5eee6968-0bd5-4d6f-b445-99340fbf5328\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-06-30 23:48:53 +0000 UTC FieldsV1
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-pm2pp,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-pm2pp,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-pm2pp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-30 23:48:52 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-30 23:48:52 +0000 UTC,Reason:ContainersNotReady,Message:containers 
with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-30 23:48:52 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-30 23:48:52 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:,StartTime:2020-06-30 23:48:52 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jun 30 23:48:55.972: INFO: Pod "webserver-deployment-84855cf797-2qj5c" is available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-2qj5c webserver-deployment-84855cf797- deployment-174 /api/v1/namespaces/deployment-174/pods/webserver-deployment-84855cf797-2qj5c f22fc247-ffac-4dc2-b12c-fc3cbac1df6f 17235194 0 2020-06-30 23:48:35 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 1ace2eb9-6dbc-4a31-a04d-fb803e5af96a 0xc003fe9177 0xc003fe9178}] [] [{kube-controller-manager Update v1 2020-06-30 23:48:35 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"1ace2eb9-6dbc-4a31-a04d-fb803e5af96a\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-06-30 23:48:47 +0000 UTC FieldsV1
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.1.67\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-pm2pp,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-pm2pp,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-pm2pp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-30 23:48:35 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-30 
23:48:47 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-30 23:48:47 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-30 23:48:35 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.13,PodIP:10.244.1.67,StartTime:2020-06-30 23:48:35 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-06-30 23:48:46 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://5ce76b0a9bd929e8ddf421a58777cf2396ee8241c0233a92a8dcf965f4cf9bbd,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.67,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jun 30 23:48:55.973: INFO: Pod "webserver-deployment-84855cf797-55z2b" is not available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-55z2b webserver-deployment-84855cf797- deployment-174 /api/v1/namespaces/deployment-174/pods/webserver-deployment-84855cf797-55z2b 5c5aafcd-9301-4296-95ab-8d433c398141 17235400 0 2020-06-30 23:48:52 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 1ace2eb9-6dbc-4a31-a04d-fb803e5af96a 0xc003fe9327 0xc003fe9328}] [] [{kube-controller-manager Update v1 2020-06-30 23:48:52 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"1ace2eb9-6dbc-4a31-a04d-fb803e5af96a\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-06-30 23:48:54 +0000 UTC FieldsV1
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-pm2pp,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-pm2pp,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-pm2pp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-30 23:48:53 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-30 23:48:53 +0000 
UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-30 23:48:53 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-30 23:48:52 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:,StartTime:2020-06-30 23:48:53 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jun 30 23:48:55.973: INFO: Pod "webserver-deployment-84855cf797-56hz2" is not available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-56hz2 webserver-deployment-84855cf797- deployment-174 /api/v1/namespaces/deployment-174/pods/webserver-deployment-84855cf797-56hz2 ac316644-a293-4e80-9e32-091104d67200 17235387 0 2020-06-30 23:48:52 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 1ace2eb9-6dbc-4a31-a04d-fb803e5af96a 0xc003fe94b7 0xc003fe94b8}] [] [{kube-controller-manager Update v1 2020-06-30 23:48:52 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"1ace2eb9-6dbc-4a31-a04d-fb803e5af96a\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-06-30 23:48:54 +0000 UTC FieldsV1
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-pm2pp,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-pm2pp,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-pm2pp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-30 23:48:53 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-30 23:48:53 +0000 
UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-30 23:48:53 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-30 23:48:52 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.13,PodIP:,StartTime:2020-06-30 23:48:53 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jun 30 23:48:55.973: INFO: Pod "webserver-deployment-84855cf797-7h2dt" is not available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-7h2dt webserver-deployment-84855cf797- deployment-174 /api/v1/namespaces/deployment-174/pods/webserver-deployment-84855cf797-7h2dt 90669b29-9cef-4895-8136-5926c60b9f0b 17235404 0 2020-06-30 23:48:52 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 1ace2eb9-6dbc-4a31-a04d-fb803e5af96a 0xc003fe9647 0xc003fe9648}] [] [{kube-controller-manager Update v1 2020-06-30 23:48:52 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"1ace2eb9-6dbc-4a31-a04d-fb803e5af96a\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-06-30 23:48:55 +0000 UTC FieldsV1
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-pm2pp,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-pm2pp,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-pm2pp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-30 23:48:53 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-30 23:48:53 +0000 
UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-30 23:48:53 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-30 23:48:52 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:,StartTime:2020-06-30 23:48:53 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jun 30 23:48:55.973: INFO: Pod "webserver-deployment-84855cf797-bktmp" is available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-bktmp webserver-deployment-84855cf797- deployment-174 /api/v1/namespaces/deployment-174/pods/webserver-deployment-84855cf797-bktmp e89ed72a-b30c-4820-ad3b-fa5cdcf48b68 17235158 0 2020-06-30 23:48:35 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 1ace2eb9-6dbc-4a31-a04d-fb803e5af96a 0xc003fe97d7 0xc003fe97d8}] [] [{kube-controller-manager Update v1 2020-06-30 23:48:35 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"1ace2eb9-6dbc-4a31-a04d-fb803e5af96a\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-06-30 23:48:46 +0000 UTC FieldsV1
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.1.66\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-pm2pp,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-pm2pp,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-pm2pp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-30 23:48:35 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-30 
23:48:46 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-30 23:48:46 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-30 23:48:35 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.13,PodIP:10.244.1.66,StartTime:2020-06-30 23:48:35 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-06-30 23:48:46 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://9df20ca7b00680e5377303863ef2812d2b6aa30fc6d40335c2cf708fb5cd7112,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.66,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Jun 30 23:48:55.973: INFO: Pod "webserver-deployment-84855cf797-ch8n8" is not available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-ch8n8 webserver-deployment-84855cf797- deployment-174 /api/v1/namespaces/deployment-174/pods/webserver-deployment-84855cf797-ch8n8 8583d324-e5b4-42e6-b5c0-7fc014b113ab 17235349 0 2020-06-30 23:48:52 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 1ace2eb9-6dbc-4a31-a04d-fb803e5af96a 0xc003fe9987 0xc003fe9988}] [] [{kube-controller-manager Update v1 2020-06-30 23:48:52 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"1ace2eb9-6dbc-4a31-a04d-fb803e5af96a\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-06-30 23:48:53 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-pm2pp,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-pm2pp,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-pm2pp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-30 23:48:52 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-30 23:48:52 +0000 
UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-30 23:48:52 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-30 23:48:52 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.13,PodIP:,StartTime:2020-06-30 23:48:52 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jun 30 23:48:55.974: INFO: Pod "webserver-deployment-84855cf797-cjz29" is available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-cjz29 webserver-deployment-84855cf797- deployment-174 /api/v1/namespaces/deployment-174/pods/webserver-deployment-84855cf797-cjz29 976d8b5d-cc33-4579-8253-cdf5b2c7680d 17235205 0 2020-06-30 23:48:35 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 1ace2eb9-6dbc-4a31-a04d-fb803e5af96a 0xc003fe9b17 0xc003fe9b18}] [] [{kube-controller-manager Update v1 2020-06-30 23:48:35 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"1ace2eb9-6dbc-4a31-a04d-fb803e5af96a\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-06-30 23:48:47 +0000 UTC FieldsV1 
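The alternating "is available" / "is not available" verdicts in these dumps reduce to the pod's Ready condition: a pod counts as available once it is Running, Ready is True, and Ready has held for the deployment's minReadySeconds (zero here, since the spec sets none). A minimal Go sketch of that rule, using a hand-rolled helper rather than the framework's own:

package main

import (
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// isPodAvailable approximates the rule behind the "is available" log lines:
// the pod must be Running with Ready=True, and Ready must have held for at
// least minReadySeconds. Hypothetical helper, not the e2e framework's code.
func isPodAvailable(pod *corev1.Pod, minReadySeconds int32, now time.Time) bool {
	if pod.Status.Phase != corev1.PodRunning {
		return false
	}
	for _, c := range pod.Status.Conditions {
		if c.Type != corev1.PodReady {
			continue
		}
		if c.Status != corev1.ConditionTrue {
			return false
		}
		readyFor := now.Sub(c.LastTransitionTime.Time)
		return minReadySeconds == 0 || readyFor >= time.Duration(minReadySeconds)*time.Second
	}
	return false
}

func main() {
	pod := &corev1.Pod{Status: corev1.PodStatus{
		Phase: corev1.PodRunning,
		Conditions: []corev1.PodCondition{{
			Type:               corev1.PodReady,
			Status:             corev1.ConditionTrue,
			LastTransitionTime: metav1.NewTime(time.Now().Add(-10 * time.Second)),
		}},
	}}
	fmt.Println(isPodAvailable(pod, 0, time.Now())) // true
}

With minReadySeconds at zero, availability follows Ready immediately, which is why the Running pods above flip to available as soon as their Ready condition transitions.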
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.2.115\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-pm2pp,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-pm2pp,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-pm2pp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-30 23:48:35 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-30 
23:48:47 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-30 23:48:47 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-30 23:48:35 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:10.244.2.115,StartTime:2020-06-30 23:48:35 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-06-30 23:48:46 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://c084d0f6d44d0a29180e6e858c0b8ad94684a62daa71a778fc3babadf7b2b44c,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.115,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Jun 30 23:48:55.974: INFO: Pod "webserver-deployment-84855cf797-dbh49" is not available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-dbh49 webserver-deployment-84855cf797- deployment-174 /api/v1/namespaces/deployment-174/pods/webserver-deployment-84855cf797-dbh49 74166dc1-a5e8-4db0-80ea-0ce26a2b0d2b 17235374 0 2020-06-30 23:48:52 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 1ace2eb9-6dbc-4a31-a04d-fb803e5af96a 0xc003fe9cc7 0xc003fe9cc8}] [] [{kube-controller-manager Update v1 2020-06-30 23:48:52 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"1ace2eb9-6dbc-4a31-a04d-fb803e5af96a\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-06-30 23:48:53 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-pm2pp,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-pm2pp,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-pm2pp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-30 23:48:53 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-30 23:48:53 +0000 
UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-30 23:48:53 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-30 23:48:52 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.13,PodIP:,StartTime:2020-06-30 23:48:53 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jun 30 23:48:55.974: INFO: Pod "webserver-deployment-84855cf797-dcrnq" is not available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-dcrnq webserver-deployment-84855cf797- deployment-174 /api/v1/namespaces/deployment-174/pods/webserver-deployment-84855cf797-dcrnq 9ad064c5-5ed2-4bca-87fb-4f8740abcc49 17235364 0 2020-06-30 23:48:52 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 1ace2eb9-6dbc-4a31-a04d-fb803e5af96a 0xc003fe9e57 0xc003fe9e58}] [] [{kube-controller-manager Update v1 2020-06-30 23:48:52 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"1ace2eb9-6dbc-4a31-a04d-fb803e5af96a\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-06-30 23:48:53 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-pm2pp,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-pm2pp,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-pm2pp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-30 23:48:53 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-30 23:48:53 +0000 
UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-30 23:48:53 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-30 23:48:52 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.13,PodIP:,StartTime:2020-06-30 23:48:53 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jun 30 23:48:55.974: INFO: Pod "webserver-deployment-84855cf797-fbq5l" is available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-fbq5l webserver-deployment-84855cf797- deployment-174 /api/v1/namespaces/deployment-174/pods/webserver-deployment-84855cf797-fbq5l 13af97df-f51a-4419-9f7d-b88a3b740391 17235160 0 2020-06-30 23:48:35 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 1ace2eb9-6dbc-4a31-a04d-fb803e5af96a 0xc003fe9fe7 0xc003fe9fe8}] [] [{kube-controller-manager Update v1 2020-06-30 23:48:35 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"1ace2eb9-6dbc-4a31-a04d-fb803e5af96a\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-06-30 23:48:46 +0000 UTC FieldsV1 
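Every dump in this series carries the same label pair (name=httpd, pod-template-hash=84855cf797) tying the pod back to its ReplicaSet, and that selector is how the enumeration is produced. A sketch of reproducing it with client-go; the kubeconfig path is an assumption, and a reachable cluster is required:

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Build a client from a kubeconfig; adjust the path for your environment.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Select by the labels the ReplicaSet stamps on its pods, exactly the
	// pair shown in every ObjectMeta above.
	pods, err := cs.CoreV1().Pods("deployment-174").List(context.TODO(), metav1.ListOptions{
		LabelSelector: "name=httpd,pod-template-hash=84855cf797",
	})
	if err != nil {
		panic(err)
	}
	for _, pod := range pods.Items {
		ready := false
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
				ready = true
			}
		}
		fmt.Printf("Pod %q phase=%s ready=%v\n", pod.Name, pod.Status.Phase, ready)
	}
}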
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.2.113\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-pm2pp,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-pm2pp,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-pm2pp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-30 23:48:35 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-30 
23:48:46 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-30 23:48:46 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-30 23:48:35 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:10.244.2.113,StartTime:2020-06-30 23:48:35 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-06-30 23:48:46 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://94a7779467d7bc7858de58b322824d0393f167b889ebe45e56e57c315049175a,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.113,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Jun 30 23:48:55.975: INFO: Pod "webserver-deployment-84855cf797-fjnnp" is available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-fjnnp webserver-deployment-84855cf797- deployment-174 /api/v1/namespaces/deployment-174/pods/webserver-deployment-84855cf797-fjnnp 535434f2-7c84-4e47-af12-12f72915f6f3 17235114 0 2020-06-30 23:48:35 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 1ace2eb9-6dbc-4a31-a04d-fb803e5af96a 0xc003fbc1a7 0xc003fbc1a8}] [] [{kube-controller-manager Update v1 2020-06-30 23:48:35 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"1ace2eb9-6dbc-4a31-a04d-fb803e5af96a\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-06-30 23:48:41 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.1.63\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-pm2pp,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-pm2pp,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-pm2pp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-30 23:48:35 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-30 
23:48:41 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-30 23:48:41 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-30 23:48:35 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.13,PodIP:10.244.1.63,StartTime:2020-06-30 23:48:35 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-06-30 23:48:40 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://39a1a409cf4eb3629fb1d6324ec9b93f96ec8011db2b2cd9a599071446d6619e,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.63,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Jun 30 23:48:55.975: INFO: Pod "webserver-deployment-84855cf797-gzq5d" is available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-gzq5d webserver-deployment-84855cf797- deployment-174 /api/v1/namespaces/deployment-174/pods/webserver-deployment-84855cf797-gzq5d 982b4e0a-e4ad-45e8-9e2f-28dd74ab1e82 17235134 0 2020-06-30 23:48:35 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 1ace2eb9-6dbc-4a31-a04d-fb803e5af96a 0xc003fbc357 0xc003fbc358}] [] [{kube-controller-manager Update v1 2020-06-30 23:48:35 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"1ace2eb9-6dbc-4a31-a04d-fb803e5af96a\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-06-30 23:48:43 +0000 UTC FieldsV1 
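Each pod reports QOSClass:BestEffort, which follows from the httpd container declaring empty Limits and Requests. A simplified reconstruction of the QoS classification, deliberately coarser than the kubelet's actual logic (which also inspects init containers and per-resource edge cases):

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

// qosClass is a simplified sketch of Kubernetes QoS assignment: no requests
// and no limits anywhere -> BestEffort; requests equal to limits for cpu and
// memory on every container -> Guaranteed; anything else -> Burstable.
func qosClass(pod *corev1.Pod) corev1.PodQOSClass {
	anySet := false
	allGuaranteed := true
	for _, c := range pod.Spec.Containers {
		req, lim := c.Resources.Requests, c.Resources.Limits
		if len(req) > 0 || len(lim) > 0 {
			anySet = true
		}
		for _, r := range []corev1.ResourceName{corev1.ResourceCPU, corev1.ResourceMemory} {
			rq, rok := req[r]
			lm, lok := lim[r]
			if !rok || !lok || rq.Cmp(lm) != 0 {
				allGuaranteed = false
			}
		}
	}
	switch {
	case !anySet:
		return corev1.PodQOSBestEffort
	case allGuaranteed:
		return corev1.PodQOSGuaranteed
	default:
		return corev1.PodQOSBurstable
	}
}

func main() {
	pod := &corev1.Pod{Spec: corev1.PodSpec{
		Containers: []corev1.Container{{Name: "httpd"}}, // no requests/limits set
	}}
	fmt.Println(qosClass(pod)) // BestEffort
}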
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.2.112\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-pm2pp,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-pm2pp,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-pm2pp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-30 23:48:35 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-30 
23:48:43 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-30 23:48:43 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-30 23:48:35 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:10.244.2.112,StartTime:2020-06-30 23:48:35 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-06-30 23:48:42 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://e6960a3ad2e759945d9de4d9b969acd15bf1df2a9b4bf115ceca8332738cb01a,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.112,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Jun 30 23:48:55.975: INFO: Pod "webserver-deployment-84855cf797-hgnvw" is not available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-hgnvw webserver-deployment-84855cf797- deployment-174 /api/v1/namespaces/deployment-174/pods/webserver-deployment-84855cf797-hgnvw a6ef38a5-5fe0-4b13-ae5e-d27439582c44 17235393 0 2020-06-30 23:48:52 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 1ace2eb9-6dbc-4a31-a04d-fb803e5af96a 0xc003fbc507 0xc003fbc508}] [] [{kube-controller-manager Update v1 2020-06-30 23:48:52 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"1ace2eb9-6dbc-4a31-a04d-fb803e5af96a\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-06-30 23:48:54 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-pm2pp,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-pm2pp,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-pm2pp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-30 23:48:53 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-30 23:48:53 +0000 
UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-30 23:48:53 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-30 23:48:52 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:,StartTime:2020-06-30 23:48:53 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jun 30 23:48:55.975: INFO: Pod "webserver-deployment-84855cf797-j7jcq" is not available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-j7jcq webserver-deployment-84855cf797- deployment-174 /api/v1/namespaces/deployment-174/pods/webserver-deployment-84855cf797-j7jcq 52583692-b031-4c9b-b1de-6dd743ad2222 17235369 0 2020-06-30 23:48:52 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 1ace2eb9-6dbc-4a31-a04d-fb803e5af96a 0xc003fbc697 0xc003fbc698}] [] [{kube-controller-manager Update v1 2020-06-30 23:48:52 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"1ace2eb9-6dbc-4a31-a04d-fb803e5af96a\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-06-30 23:48:53 +0000 UTC FieldsV1 
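The not-yet-available pods all show a ContainerState whose Waiting branch is set with reason ContainerCreating, while the available ones show Running with a StartedAt timestamp; the three branches (Waiting, Running, Terminated) form a one-of union with exactly one set. A small sketch that renders the union the way these dumps do:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

// describeState renders the one-of ContainerState union as it appears in
// the dumps above: exactly one of Waiting, Running, Terminated is non-nil.
func describeState(s corev1.ContainerState) string {
	switch {
	case s.Waiting != nil:
		return fmt.Sprintf("Waiting (reason=%s)", s.Waiting.Reason)
	case s.Running != nil:
		return fmt.Sprintf("Running since %s", s.Running.StartedAt)
	case s.Terminated != nil:
		return fmt.Sprintf("Terminated (exit=%d)", s.Terminated.ExitCode)
	default:
		return "Unknown"
	}
}

func main() {
	creating := corev1.ContainerState{
		Waiting: &corev1.ContainerStateWaiting{Reason: "ContainerCreating"},
	}
	fmt.Println(describeState(creating)) // Waiting (reason=ContainerCreating)
}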
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-pm2pp,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-pm2pp,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-pm2pp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-30 23:48:53 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-30 23:48:53 +0000 
UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-30 23:48:53 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-30 23:48:52 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.13,PodIP:,StartTime:2020-06-30 23:48:53 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jun 30 23:48:55.975: INFO: Pod "webserver-deployment-84855cf797-nkfb4" is available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-nkfb4 webserver-deployment-84855cf797- deployment-174 /api/v1/namespaces/deployment-174/pods/webserver-deployment-84855cf797-nkfb4 635ea5bb-634b-4aef-ae88-8be1eefefca2 17235145 0 2020-06-30 23:48:35 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 1ace2eb9-6dbc-4a31-a04d-fb803e5af96a 0xc003fbc827 0xc003fbc828}] [] [{kube-controller-manager Update v1 2020-06-30 23:48:35 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"1ace2eb9-6dbc-4a31-a04d-fb803e5af96a\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-06-30 23:48:45 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.1.65\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-pm2pp,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-pm2pp,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-pm2pp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-30 23:48:35 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-30 
23:48:45 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-30 23:48:45 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-30 23:48:35 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.13,PodIP:10.244.1.65,StartTime:2020-06-30 23:48:35 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-06-30 23:48:45 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://6c1d2a7766ca36e0debde275b76cd9ea4dcc54ba33d6abf7bef926c68b9a75fb,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.65,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Jun 30 23:48:55.976: INFO: Pod "webserver-deployment-84855cf797-pxrgr" is not available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-pxrgr webserver-deployment-84855cf797- deployment-174 /api/v1/namespaces/deployment-174/pods/webserver-deployment-84855cf797-pxrgr e0cf7562-4433-4aed-a1ad-bd542d91a5b6 17235335 0 2020-06-30 23:48:52 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 1ace2eb9-6dbc-4a31-a04d-fb803e5af96a 0xc003fbc9d7 0xc003fbc9d8}] [] [{kube-controller-manager Update v1 2020-06-30 23:48:52 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"1ace2eb9-6dbc-4a31-a04d-fb803e5af96a\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-06-30 23:48:52 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-pm2pp,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-pm2pp,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-pm2pp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-30 23:48:52 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-30 23:48:52 +0000 
UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-30 23:48:52 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-30 23:48:52 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:,StartTime:2020-06-30 23:48:52 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jun 30 23:48:55.976: INFO: Pod "webserver-deployment-84855cf797-qw89w" is not available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-qw89w webserver-deployment-84855cf797- deployment-174 /api/v1/namespaces/deployment-174/pods/webserver-deployment-84855cf797-qw89w 1bd05f48-e0da-4adb-9d0c-7a697e5362b5 17235355 0 2020-06-30 23:48:52 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 1ace2eb9-6dbc-4a31-a04d-fb803e5af96a 0xc003fbcb77 0xc003fbcb78}] [] [{kube-controller-manager Update v1 2020-06-30 23:48:52 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"1ace2eb9-6dbc-4a31-a04d-fb803e5af96a\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-06-30 23:48:53 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-pm2pp,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-pm2pp,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-pm2pp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-30 23:48:52 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-30 23:48:52 +0000 
UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-30 23:48:52 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-30 23:48:52 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.13,PodIP:,StartTime:2020-06-30 23:48:52 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jun 30 23:48:55.976: INFO: Pod "webserver-deployment-84855cf797-rg6rd" is not available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-rg6rd webserver-deployment-84855cf797- deployment-174 /api/v1/namespaces/deployment-174/pods/webserver-deployment-84855cf797-rg6rd ce361ad7-4919-48be-b825-f2e03dd93ce7 17235385 0 2020-06-30 23:48:52 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 1ace2eb9-6dbc-4a31-a04d-fb803e5af96a 0xc003fbcd07 0xc003fbcd08}] [] [{kube-controller-manager Update v1 2020-06-30 23:48:52 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"1ace2eb9-6dbc-4a31-a04d-fb803e5af96a\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-06-30 23:48:53 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-pm2pp,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-pm2pp,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-pm2pp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-30 23:48:53 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-30 23:48:53 +0000 
UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-30 23:48:53 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-30 23:48:52 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:,StartTime:2020-06-30 23:48:53 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jun 30 23:48:55.977: INFO: Pod "webserver-deployment-84855cf797-tctvq" is available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-tctvq webserver-deployment-84855cf797- deployment-174 /api/v1/namespaces/deployment-174/pods/webserver-deployment-84855cf797-tctvq 6ef520ff-9016-47ae-87c6-889fbf7d5a7b 17235131 0 2020-06-30 23:48:35 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 1ace2eb9-6dbc-4a31-a04d-fb803e5af96a 0xc003fbce97 0xc003fbce98}] [] [{kube-controller-manager Update v1 2020-06-30 23:48:35 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"1ace2eb9-6dbc-4a31-a04d-fb803e5af96a\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-06-30 23:48:43 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.1.64\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-pm2pp,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-pm2pp,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-pm2pp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-30 23:48:35 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-30 
23:48:43 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-30 23:48:43 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-30 23:48:35 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.13,PodIP:10.244.1.64,StartTime:2020-06-30 23:48:35 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-06-30 23:48:42 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://5c4e7f5f468a5ee9ede09e4ec514322aa64657bd8dce6adb30df8865da9f45e9,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.64,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Jun 30 23:48:55.977: INFO: Pod "webserver-deployment-84855cf797-wg58j" is not available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-wg58j webserver-deployment-84855cf797- deployment-174 /api/v1/namespaces/deployment-174/pods/webserver-deployment-84855cf797-wg58j c43b38bc-5de1-478e-a95d-f95713b3eca3 17235353 0 2020-06-30 23:48:52 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 1ace2eb9-6dbc-4a31-a04d-fb803e5af96a 0xc003fbd047 0xc003fbd048}] [] [{kube-controller-manager Update v1 2020-06-30 23:48:52 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"1ace2eb9-6dbc-4a31-a04d-fb803e5af96a\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-06-30 23:48:53 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-pm2pp,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-pm2pp,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-pm2pp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-30 23:48:52 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-30 23:48:52 +0000 
UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-30 23:48:52 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-30 23:48:52 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:,StartTime:2020-06-30 23:48:52 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 30 23:48:55.977: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-174" for this suite. • [SLOW TEST:21.534 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should support proportional scaling [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] Deployment deployment should support proportional scaling [Conformance]","total":294,"completed":42,"skipped":522,"failed":0} S ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 30 23:48:56.840: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating projection with secret that has name projected-secret-test-0ead1a2a-3a48-4cff-a653-f9b1bfca0d94 STEP: Creating a pod to test consume secrets Jun 30 23:48:58.271: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-9c0a1ef8-7d22-437f-9d31-8c2924e1d92d" in namespace "projected-6433" to be "Succeeded or Failed" Jun 30 23:48:58.277: INFO: Pod "pod-projected-secrets-9c0a1ef8-7d22-437f-9d31-8c2924e1d92d": Phase="Pending", Reason="", readiness=false. Elapsed: 5.136053ms Jun 30 23:49:00.345: INFO: Pod "pod-projected-secrets-9c0a1ef8-7d22-437f-9d31-8c2924e1d92d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.073891207s Jun 30 23:49:02.477: INFO: Pod "pod-projected-secrets-9c0a1ef8-7d22-437f-9d31-8c2924e1d92d": Phase="Pending", Reason="", readiness=false. 
Elapsed: 4.205130829s Jun 30 23:49:04.883: INFO: Pod "pod-projected-secrets-9c0a1ef8-7d22-437f-9d31-8c2924e1d92d": Phase="Pending", Reason="", readiness=false. Elapsed: 6.611778835s Jun 30 23:49:06.939: INFO: Pod "pod-projected-secrets-9c0a1ef8-7d22-437f-9d31-8c2924e1d92d": Phase="Pending", Reason="", readiness=false. Elapsed: 8.667599663s Jun 30 23:49:08.960: INFO: Pod "pod-projected-secrets-9c0a1ef8-7d22-437f-9d31-8c2924e1d92d": Phase="Pending", Reason="", readiness=false. Elapsed: 10.688680916s Jun 30 23:49:11.099: INFO: Pod "pod-projected-secrets-9c0a1ef8-7d22-437f-9d31-8c2924e1d92d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.827621628s STEP: Saw pod success Jun 30 23:49:11.099: INFO: Pod "pod-projected-secrets-9c0a1ef8-7d22-437f-9d31-8c2924e1d92d" satisfied condition "Succeeded or Failed" Jun 30 23:49:11.113: INFO: Trying to get logs from node latest-worker pod pod-projected-secrets-9c0a1ef8-7d22-437f-9d31-8c2924e1d92d container projected-secret-volume-test: STEP: delete the pod Jun 30 23:49:11.254: INFO: Waiting for pod pod-projected-secrets-9c0a1ef8-7d22-437f-9d31-8c2924e1d92d to disappear Jun 30 23:49:11.286: INFO: Pod pod-projected-secrets-9c0a1ef8-7d22-437f-9d31-8c2924e1d92d no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 30 23:49:11.286: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-6433" for this suite. • [SLOW TEST:14.471 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:35 should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":294,"completed":43,"skipped":523,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 30 23:49:11.312: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-1824.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-1824.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Jun 30 23:49:23.489: INFO: DNS probes using dns-1824/dns-test-a1b28ae4-06a5-4e68-bf59-18c81e20f903 succeeded STEP: deleting the pod [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 30 23:49:23.518: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-1824" for this suite. • [SLOW TEST:12.224 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for the cluster [Conformance]","total":294,"completed":44,"skipped":573,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 30 23:49:23.536: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name cm-test-opt-del-9c4b8c18-3c12-42de-a8cf-ac9622398a09 STEP: Creating configMap with name cm-test-opt-upd-571040df-26a0-495c-87a2-fba61d2b30e5 STEP: Creating the pod STEP: Deleting configmap cm-test-opt-del-9c4b8c18-3c12-42de-a8cf-ac9622398a09 STEP: Updating configmap cm-test-opt-upd-571040df-26a0-495c-87a2-fba61d2b30e5 STEP: Creating configMap with name cm-test-opt-create-b0cee519-2157-44fe-a45b-ae06de5172d1 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected configMap 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 30 23:49:34.446: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-4967" for this suite. • [SLOW TEST:10.918 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:36 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":294,"completed":45,"skipped":596,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 30 23:49:34.455: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap configmap-9255/configmap-test-5957cecd-0445-4dac-91ff-a9001b2586f1 STEP: Creating a pod to test consume configMaps Jun 30 23:49:34.595: INFO: Waiting up to 5m0s for pod "pod-configmaps-bd00d16d-9eb6-472a-95da-a081c5ff5c4d" in namespace "configmap-9255" to be "Succeeded or Failed" Jun 30 23:49:34.604: INFO: Pod "pod-configmaps-bd00d16d-9eb6-472a-95da-a081c5ff5c4d": Phase="Pending", Reason="", readiness=false. Elapsed: 9.265384ms Jun 30 23:49:36.608: INFO: Pod "pod-configmaps-bd00d16d-9eb6-472a-95da-a081c5ff5c4d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012982244s Jun 30 23:49:38.613: INFO: Pod "pod-configmaps-bd00d16d-9eb6-472a-95da-a081c5ff5c4d": Phase="Running", Reason="", readiness=true. Elapsed: 4.017854902s Jun 30 23:49:40.617: INFO: Pod "pod-configmaps-bd00d16d-9eb6-472a-95da-a081c5ff5c4d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.022414348s STEP: Saw pod success Jun 30 23:49:40.617: INFO: Pod "pod-configmaps-bd00d16d-9eb6-472a-95da-a081c5ff5c4d" satisfied condition "Succeeded or Failed" Jun 30 23:49:40.622: INFO: Trying to get logs from node latest-worker2 pod pod-configmaps-bd00d16d-9eb6-472a-95da-a081c5ff5c4d container env-test: STEP: delete the pod Jun 30 23:49:40.784: INFO: Waiting for pod pod-configmaps-bd00d16d-9eb6-472a-95da-a081c5ff5c4d to disappear Jun 30 23:49:40.825: INFO: Pod pod-configmaps-bd00d16d-9eb6-472a-95da-a081c5ff5c4d no longer exists [AfterEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 30 23:49:40.825: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-9255" for this suite. 
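For reference, the scenario this spec exercises reduces to a ConfigMap whose keys are injected into a container's environment via configMapKeyRef. A minimal sketch of the equivalent manifest, reusing the object names from the log (the image, data key, and container command are illustrative assumptions; the test itself builds these objects programmatically):

apiVersion: v1
kind: ConfigMap
metadata:
  name: configmap-test-5957cecd-0445-4dac-91ff-a9001b2586f1
  namespace: configmap-9255
data:
  data-1: value-1                  # assumed key/value; not shown in the log
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-configmaps-bd00d16d-9eb6-472a-95da-a081c5ff5c4d
  namespace: configmap-9255
spec:
  restartPolicy: Never             # the pod is expected to reach "Succeeded"
  containers:
  - name: env-test
    image: busybox                 # assumed image
    command: ["sh", "-c", "env"]   # assumed command; prints the injected variables
    env:
    - name: DATA_1
      valueFrom:
        configMapKeyRef:
          name: configmap-test-5957cecd-0445-4dac-91ff-a9001b2586f1
          key: data-1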
• [SLOW TEST:6.399 seconds] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:34 should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance]","total":294,"completed":46,"skipped":623,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 30 23:49:40.855: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating secret with name secret-test-4995c0a4-73ee-458f-bb9b-1817c2f17e68 STEP: Creating a pod to test consume secrets Jun 30 23:49:41.495: INFO: Waiting up to 5m0s for pod "pod-secrets-e5225d76-72d3-4084-888b-16565e784718" in namespace "secrets-4681" to be "Succeeded or Failed" Jun 30 23:49:41.542: INFO: Pod "pod-secrets-e5225d76-72d3-4084-888b-16565e784718": Phase="Pending", Reason="", readiness=false. Elapsed: 46.506492ms Jun 30 23:49:43.548: INFO: Pod "pod-secrets-e5225d76-72d3-4084-888b-16565e784718": Phase="Pending", Reason="", readiness=false. Elapsed: 2.052013162s Jun 30 23:49:45.552: INFO: Pod "pod-secrets-e5225d76-72d3-4084-888b-16565e784718": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.056922085s STEP: Saw pod success Jun 30 23:49:45.552: INFO: Pod "pod-secrets-e5225d76-72d3-4084-888b-16565e784718" satisfied condition "Succeeded or Failed" Jun 30 23:49:45.556: INFO: Trying to get logs from node latest-worker2 pod pod-secrets-e5225d76-72d3-4084-888b-16565e784718 container secret-volume-test: STEP: delete the pod Jun 30 23:49:45.636: INFO: Waiting for pod pod-secrets-e5225d76-72d3-4084-888b-16565e784718 to disappear Jun 30 23:49:45.652: INFO: Pod pod-secrets-e5225d76-72d3-4084-888b-16565e784718 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 30 23:49:45.653: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-4681" for this suite. STEP: Destroying namespace "secret-namespace-6374" for this suite. 
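The point of this spec is that a secret volume is always resolved in the pod's own namespace, even when a secret with the same name exists in another namespace (here secret-namespace-6374). A minimal sketch of the pod side (pod, secret, and namespace names are from the log; the image, mount path, key name, and command are illustrative assumptions):

apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets-e5225d76-72d3-4084-888b-16565e784718
  namespace: secrets-4681
spec:
  restartPolicy: Never
  volumes:
  - name: secret-volume
    secret:
      # resolved in secrets-4681, not in secret-namespace-6374
      secretName: secret-test-4995c0a4-73ee-458f-bb9b-1817c2f17e68
  containers:
  - name: secret-volume-test
    image: busybox                                          # assumed image
    command: ["sh", "-c", "cat /etc/secret-volume/data-1"]  # assumed; reads a mounted key
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
      readOnly: true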
•{"msg":"PASSED [sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]","total":294,"completed":47,"skipped":648,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 30 23:49:45.698: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54 [It] should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating pod busybox-3b848396-a5c1-4e37-8ab5-a7cc45df36b3 in namespace container-probe-778 Jun 30 23:49:49.806: INFO: Started pod busybox-3b848396-a5c1-4e37-8ab5-a7cc45df36b3 in namespace container-probe-778 STEP: checking the pod's current state and verifying that restartCount is present Jun 30 23:49:49.809: INFO: Initial restart count of pod busybox-3b848396-a5c1-4e37-8ab5-a7cc45df36b3 is 0 STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 30 23:53:50.489: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-778" for this suite. 
• [SLOW TEST:244.874 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":294,"completed":48,"skipped":679,"failed":0} SSSSSSSSSSSSSS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 30 23:53:50.573: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default command and arguments [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test override all Jun 30 23:53:50.660: INFO: Waiting up to 5m0s for pod "client-containers-54ebb92a-77d7-43f5-803f-459701d54a30" in namespace "containers-9874" to be "Succeeded or Failed" Jun 30 23:53:50.680: INFO: Pod "client-containers-54ebb92a-77d7-43f5-803f-459701d54a30": Phase="Pending", Reason="", readiness=false. Elapsed: 20.086113ms Jun 30 23:53:52.870: INFO: Pod "client-containers-54ebb92a-77d7-43f5-803f-459701d54a30": Phase="Pending", Reason="", readiness=false. Elapsed: 2.210180955s Jun 30 23:53:54.875: INFO: Pod "client-containers-54ebb92a-77d7-43f5-803f-459701d54a30": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.214384613s STEP: Saw pod success Jun 30 23:53:54.875: INFO: Pod "client-containers-54ebb92a-77d7-43f5-803f-459701d54a30" satisfied condition "Succeeded or Failed" Jun 30 23:53:54.877: INFO: Trying to get logs from node latest-worker2 pod client-containers-54ebb92a-77d7-43f5-803f-459701d54a30 container test-container: STEP: delete the pod Jun 30 23:53:54.941: INFO: Waiting for pod client-containers-54ebb92a-77d7-43f5-803f-459701d54a30 to disappear Jun 30 23:53:54.956: INFO: Pod client-containers-54ebb92a-77d7-43f5-803f-459701d54a30 no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 30 23:53:54.956: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-9874" for this suite. 
•{"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance]","total":294,"completed":49,"skipped":693,"failed":0} SSSSSS ------------------------------ [k8s.io] Variable Expansion should allow substituting values in a volume subpath [sig-storage] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 30 23:53:54.963: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow substituting values in a volume subpath [sig-storage] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test substitution in volume subpath Jun 30 23:53:55.090: INFO: Waiting up to 5m0s for pod "var-expansion-8b93f8fb-6ffd-4680-af60-1d0bb7dd3656" in namespace "var-expansion-9660" to be "Succeeded or Failed" Jun 30 23:53:55.109: INFO: Pod "var-expansion-8b93f8fb-6ffd-4680-af60-1d0bb7dd3656": Phase="Pending", Reason="", readiness=false. Elapsed: 18.432214ms Jun 30 23:53:57.113: INFO: Pod "var-expansion-8b93f8fb-6ffd-4680-af60-1d0bb7dd3656": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022730513s Jun 30 23:53:59.118: INFO: Pod "var-expansion-8b93f8fb-6ffd-4680-af60-1d0bb7dd3656": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.027559167s STEP: Saw pod success Jun 30 23:53:59.118: INFO: Pod "var-expansion-8b93f8fb-6ffd-4680-af60-1d0bb7dd3656" satisfied condition "Succeeded or Failed" Jun 30 23:53:59.121: INFO: Trying to get logs from node latest-worker2 pod var-expansion-8b93f8fb-6ffd-4680-af60-1d0bb7dd3656 container dapi-container: STEP: delete the pod Jun 30 23:53:59.138: INFO: Waiting for pod var-expansion-8b93f8fb-6ffd-4680-af60-1d0bb7dd3656 to disappear Jun 30 23:53:59.142: INFO: Pod var-expansion-8b93f8fb-6ffd-4680-af60-1d0bb7dd3656 no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 30 23:53:59.142: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-9660" for this suite. •{"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a volume subpath [sig-storage] [Conformance]","total":294,"completed":50,"skipped":699,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a secret. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 30 23:53:59.150: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a secret. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Discovering how many secrets are in namespace by default STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a Secret STEP: Ensuring resource quota status captures secret creation STEP: Deleting a secret STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 30 23:54:16.305: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-3220" for this suite. • [SLOW TEST:17.163 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a secret. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a secret. [Conformance]","total":294,"completed":51,"skipped":729,"failed":0} [sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 30 23:54:16.314: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating projection with secret that has name projected-secret-test-c0e9af37-b9f9-4612-9f24-42d4b7caf337 STEP: Creating a pod to test consume secrets Jun 30 23:54:16.389: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-7c06d597-3c13-4c85-8cee-906c433c6ccd" in namespace "projected-3849" to be "Succeeded or Failed" Jun 30 23:54:16.449: INFO: Pod "pod-projected-secrets-7c06d597-3c13-4c85-8cee-906c433c6ccd": Phase="Pending", Reason="", readiness=false. Elapsed: 59.581447ms Jun 30 23:54:18.502: INFO: Pod "pod-projected-secrets-7c06d597-3c13-4c85-8cee-906c433c6ccd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.112681132s Jun 30 23:54:20.507: INFO: Pod "pod-projected-secrets-7c06d597-3c13-4c85-8cee-906c433c6ccd": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.117102306s STEP: Saw pod success Jun 30 23:54:20.507: INFO: Pod "pod-projected-secrets-7c06d597-3c13-4c85-8cee-906c433c6ccd" satisfied condition "Succeeded or Failed" Jun 30 23:54:20.510: INFO: Trying to get logs from node latest-worker2 pod pod-projected-secrets-7c06d597-3c13-4c85-8cee-906c433c6ccd container projected-secret-volume-test: STEP: delete the pod Jun 30 23:54:20.737: INFO: Waiting for pod pod-projected-secrets-7c06d597-3c13-4c85-8cee-906c433c6ccd to disappear Jun 30 23:54:20.742: INFO: Pod pod-projected-secrets-7c06d597-3c13-4c85-8cee-906c433c6ccd no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 30 23:54:20.742: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3849" for this suite. •{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance]","total":294,"completed":52,"skipped":729,"failed":0} SS ------------------------------ [k8s.io] Kubelet when scheduling a read only busybox container should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 30 23:54:20.756: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:38 [It] should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 30 23:54:24.920: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-6235" for this suite. 
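------------------------------
For reference: the read-only-root-filesystem check above comes down to a single field on the container securityContext. The following is a minimal Go sketch using the upstream API types, not the suite's own helpers; the pod name, image, and the JSON-print harness are illustrative assumptions.

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	readOnly := true // kubelet mounts the container's root filesystem read-only
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "busybox-readonly-fs"}, // illustrative name
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:  "busybox",
				Image: "busybox:1.29",
				// Writing anywhere on the root filesystem should fail with EROFS.
				Command: []string{"/bin/sh", "-c", "touch /should-fail; echo exit=$?"},
				SecurityContext: &corev1.SecurityContext{
					ReadOnlyRootFilesystem: &readOnly,
				},
			}},
		},
	}
	out, err := json.MarshalIndent(pod, "", "  ")
	if err != nil {
		panic(err)
	}
	fmt.Println(string(out))
}

Printing the object shows the manifest shape; applied to a cluster, the write attempt inside the container fails, which is the property the spec asserts.
------------------------------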
•{"msg":"PASSED [k8s.io] Kubelet when scheduling a read only busybox container should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]","total":294,"completed":53,"skipped":731,"failed":0} SS ------------------------------ [sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 30 23:54:24.928: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating the pod Jun 30 23:54:29.550: INFO: Successfully updated pod "annotationupdatee85d57d2-b7cc-4f3e-8859-fa22ae791c25" [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 30 23:54:33.574: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-9636" for this suite. • [SLOW TEST:8.657 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:37 should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance]","total":294,"completed":54,"skipped":733,"failed":0} SSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 30 23:54:33.585: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir 0777 on tmpfs Jun 30 23:54:33.660: INFO: Waiting up to 5m0s for pod "pod-559d278f-8bc3-4041-938b-04a8db115f71" in namespace "emptydir-2380" to be "Succeeded or Failed" Jun 30 23:54:33.663: INFO: Pod "pod-559d278f-8bc3-4041-938b-04a8db115f71": Phase="Pending", Reason="", readiness=false. Elapsed: 3.694966ms Jun 30 23:54:35.667: INFO: Pod "pod-559d278f-8bc3-4041-938b-04a8db115f71": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006906433s Jun 30 23:54:37.670: INFO: Pod "pod-559d278f-8bc3-4041-938b-04a8db115f71": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.010400517s STEP: Saw pod success Jun 30 23:54:37.670: INFO: Pod "pod-559d278f-8bc3-4041-938b-04a8db115f71" satisfied condition "Succeeded or Failed" Jun 30 23:54:37.673: INFO: Trying to get logs from node latest-worker2 pod pod-559d278f-8bc3-4041-938b-04a8db115f71 container test-container: STEP: delete the pod Jun 30 23:54:37.717: INFO: Waiting for pod pod-559d278f-8bc3-4041-938b-04a8db115f71 to disappear Jun 30 23:54:37.730: INFO: Pod pod-559d278f-8bc3-4041-938b-04a8db115f71 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 30 23:54:37.730: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-2380" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":294,"completed":55,"skipped":743,"failed":0} SSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 30 23:54:37.741: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Jun 30 23:54:38.933: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Jun 30 23:54:40.944: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729158078, loc:(*time.Location)(0x80643c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729158078, loc:(*time.Location)(0x80643c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729158079, loc:(*time.Location)(0x80643c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729158078, loc:(*time.Location)(0x80643c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)} Jun 30 23:54:42.948: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729158078, loc:(*time.Location)(0x80643c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729158078, loc:(*time.Location)(0x80643c0)}}, 
Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729158079, loc:(*time.Location)(0x80643c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729158078, loc:(*time.Location)(0x80643c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Jun 30 23:54:46.027: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate custom resource with pruning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Jun 30 23:54:46.031: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the mutating webhook for custom resource e2e-test-webhook-4179-crds.webhook.example.com via the AdmissionRegistration API STEP: Creating a custom resource that should be mutated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 30 23:54:47.230: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-4664" for this suite. STEP: Destroying namespace "webhook-4664-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:9.541 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate custom resource with pruning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]","total":294,"completed":56,"skipped":751,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 30 23:54:47.283: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:38 [BeforeEach] when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:82 [It] should be possible to delete [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 30 23:54:47.492: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: 
Destroying namespace "kubelet-test-8474" for this suite. •{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance]","total":294,"completed":57,"skipped":772,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 30 23:54:47.505: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir 0666 on tmpfs Jun 30 23:54:47.656: INFO: Waiting up to 5m0s for pod "pod-7110bfb0-f939-4168-91a6-a3216bcfd98e" in namespace "emptydir-7099" to be "Succeeded or Failed" Jun 30 23:54:47.658: INFO: Pod "pod-7110bfb0-f939-4168-91a6-a3216bcfd98e": Phase="Pending", Reason="", readiness=false. Elapsed: 1.977354ms Jun 30 23:54:49.662: INFO: Pod "pod-7110bfb0-f939-4168-91a6-a3216bcfd98e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006055717s Jun 30 23:54:51.667: INFO: Pod "pod-7110bfb0-f939-4168-91a6-a3216bcfd98e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010995637s STEP: Saw pod success Jun 30 23:54:51.667: INFO: Pod "pod-7110bfb0-f939-4168-91a6-a3216bcfd98e" satisfied condition "Succeeded or Failed" Jun 30 23:54:51.670: INFO: Trying to get logs from node latest-worker pod pod-7110bfb0-f939-4168-91a6-a3216bcfd98e container test-container: STEP: delete the pod Jun 30 23:54:51.900: INFO: Waiting for pod pod-7110bfb0-f939-4168-91a6-a3216bcfd98e to disappear Jun 30 23:54:51.970: INFO: Pod pod-7110bfb0-f939-4168-91a6-a3216bcfd98e no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 30 23:54:51.970: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-7099" for this suite. 
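------------------------------
For reference: the (root,0666,tmpfs) case above reduces to an emptyDir volume with Medium set to Memory. A sketch with illustrative names follows; the shell probe approximates the suite's mode check rather than reproducing its test image.

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "emptydir-tmpfs-demo"}, // illustrative
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "scratch",
				VolumeSource: corev1.VolumeSource{
					// Medium: Memory backs the volume with tmpfs instead of node disk.
					EmptyDir: &corev1.EmptyDirVolumeSource{Medium: corev1.StorageMediumMemory},
				},
			}},
			Containers: []corev1.Container{{
				Name:  "test-container",
				Image: "busybox:1.29",
				// Create a file with mode 0666 and report the mode and filesystem type.
				Command: []string{"/bin/sh", "-c",
					"touch /scratch/f && chmod 0666 /scratch/f && ls -l /scratch/f && mount | grep /scratch"},
				VolumeMounts: []corev1.VolumeMount{{Name: "scratch", MountPath: "/scratch"}},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}
------------------------------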
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":294,"completed":58,"skipped":802,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 30 23:54:51.981: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin Jun 30 23:54:52.117: INFO: Waiting up to 5m0s for pod "downwardapi-volume-dc1c8ac6-dcc9-4145-bd42-12c31c6da70a" in namespace "downward-api-4318" to be "Succeeded or Failed" Jun 30 23:54:52.156: INFO: Pod "downwardapi-volume-dc1c8ac6-dcc9-4145-bd42-12c31c6da70a": Phase="Pending", Reason="", readiness=false. Elapsed: 38.838752ms Jun 30 23:54:54.160: INFO: Pod "downwardapi-volume-dc1c8ac6-dcc9-4145-bd42-12c31c6da70a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.042291442s Jun 30 23:54:56.164: INFO: Pod "downwardapi-volume-dc1c8ac6-dcc9-4145-bd42-12c31c6da70a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.046620624s STEP: Saw pod success Jun 30 23:54:56.164: INFO: Pod "downwardapi-volume-dc1c8ac6-dcc9-4145-bd42-12c31c6da70a" satisfied condition "Succeeded or Failed" Jun 30 23:54:56.168: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-dc1c8ac6-dcc9-4145-bd42-12c31c6da70a container client-container: STEP: delete the pod Jun 30 23:54:56.193: INFO: Waiting for pod downwardapi-volume-dc1c8ac6-dcc9-4145-bd42-12c31c6da70a to disappear Jun 30 23:54:56.198: INFO: Pod downwardapi-volume-dc1c8ac6-dcc9-4145-bd42-12c31c6da70a no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 30 23:54:56.198: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-4318" for this suite. 
•{"msg":"PASSED [sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance]","total":294,"completed":59,"skipped":822,"failed":0} SSS ------------------------------ [k8s.io] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 30 23:54:56.271: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow substituting values in a container's command [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test substitution in container's command Jun 30 23:54:56.338: INFO: Waiting up to 5m0s for pod "var-expansion-ce07d919-5553-441f-a3ae-b330ed5528ac" in namespace "var-expansion-6468" to be "Succeeded or Failed" Jun 30 23:54:56.342: INFO: Pod "var-expansion-ce07d919-5553-441f-a3ae-b330ed5528ac": Phase="Pending", Reason="", readiness=false. Elapsed: 3.556703ms Jun 30 23:54:58.346: INFO: Pod "var-expansion-ce07d919-5553-441f-a3ae-b330ed5528ac": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007887347s Jun 30 23:55:00.350: INFO: Pod "var-expansion-ce07d919-5553-441f-a3ae-b330ed5528ac": Phase="Running", Reason="", readiness=true. Elapsed: 4.011511324s Jun 30 23:55:02.354: INFO: Pod "var-expansion-ce07d919-5553-441f-a3ae-b330ed5528ac": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.015617299s STEP: Saw pod success Jun 30 23:55:02.354: INFO: Pod "var-expansion-ce07d919-5553-441f-a3ae-b330ed5528ac" satisfied condition "Succeeded or Failed" Jun 30 23:55:02.357: INFO: Trying to get logs from node latest-worker pod var-expansion-ce07d919-5553-441f-a3ae-b330ed5528ac container dapi-container: STEP: delete the pod Jun 30 23:55:02.439: INFO: Waiting for pod var-expansion-ce07d919-5553-441f-a3ae-b330ed5528ac to disappear Jun 30 23:55:02.449: INFO: Pod var-expansion-ce07d919-5553-441f-a3ae-b330ed5528ac no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 30 23:55:02.449: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-6468" for this suite. 
• [SLOW TEST:6.189 seconds] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should allow substituting values in a container's command [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance]","total":294,"completed":60,"skipped":825,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPreemption [Serial] validates lower priority pod preemption by critical pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 30 23:55:02.460: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-preemption STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:85 Jun 30 23:55:02.570: INFO: Waiting up to 1m0s for all nodes to be ready Jun 30 23:56:02.596: INFO: Waiting for terminating namespaces to be deleted... [It] validates lower priority pod preemption by critical pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Create pods that use 2/3 of node resources. Jun 30 23:56:02.614: INFO: Created pod: pod0-sched-preemption-low-priority Jun 30 23:56:02.679: INFO: Created pod: pod1-sched-preemption-medium-priority STEP: Wait for pods to be scheduled. STEP: Run a critical pod that use same resources as that of a lower priority pod [AfterEach] [sig-scheduling] SchedulerPreemption [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 30 23:56:14.738: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-preemption-1783" for this suite. 
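------------------------------
For reference: the preemption spec above schedules a critical pod that displaces a lower-priority one. The conformance test uses a system-reserved priority class; the sketch below shows the same mechanism with a user-defined PriorityClass instead, and every name and value in it is illustrative.

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	schedulingv1 "k8s.io/api/scheduling/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pc := &schedulingv1.PriorityClass{
		ObjectMeta: metav1.ObjectMeta{Name: "high-priority"}, // illustrative
		Value:      1000000,                                  // higher value may preempt lower-priority pods
	}
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "important-workload"}, // illustrative
		Spec: corev1.PodSpec{
			PriorityClassName: pc.Name,
			Containers: []corev1.Container{{
				Name:    "app",
				Image:   "busybox:1.29",
				Command: []string{"sleep", "3600"},
			}},
		},
	}
	for _, obj := range []interface{}{pc, pod} {
		out, _ := json.MarshalIndent(obj, "", "  ")
		fmt.Println(string(out))
	}
}

When node resources are exhausted, the scheduler evicts pods whose priority value is lower than this pod's, which is exactly the behavior the spec asserts for the critical pod.
------------------------------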
[AfterEach] [sig-scheduling] SchedulerPreemption [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:75 • [SLOW TEST:72.671 seconds] [sig-scheduling] SchedulerPreemption [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates lower priority pod preemption by critical pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPreemption [Serial] validates lower priority pod preemption by critical pod [Conformance]","total":294,"completed":61,"skipped":861,"failed":0} [sig-storage] ConfigMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 30 23:56:15.131: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name configmap-test-volume-73410c39-7005-48ed-b71c-ee707276f123 STEP: Creating a pod to test consume configMaps Jun 30 23:56:15.283: INFO: Waiting up to 5m0s for pod "pod-configmaps-0b518e97-086a-47b8-a02d-b9364a486212" in namespace "configmap-7024" to be "Succeeded or Failed" Jun 30 23:56:15.306: INFO: Pod "pod-configmaps-0b518e97-086a-47b8-a02d-b9364a486212": Phase="Pending", Reason="", readiness=false. Elapsed: 22.147498ms Jun 30 23:56:17.312: INFO: Pod "pod-configmaps-0b518e97-086a-47b8-a02d-b9364a486212": Phase="Pending", Reason="", readiness=false. Elapsed: 2.028486715s Jun 30 23:56:19.316: INFO: Pod "pod-configmaps-0b518e97-086a-47b8-a02d-b9364a486212": Phase="Pending", Reason="", readiness=false. Elapsed: 4.032994187s Jun 30 23:56:21.320: INFO: Pod "pod-configmaps-0b518e97-086a-47b8-a02d-b9364a486212": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.036356178s STEP: Saw pod success Jun 30 23:56:21.320: INFO: Pod "pod-configmaps-0b518e97-086a-47b8-a02d-b9364a486212" satisfied condition "Succeeded or Failed" Jun 30 23:56:21.322: INFO: Trying to get logs from node latest-worker pod pod-configmaps-0b518e97-086a-47b8-a02d-b9364a486212 container configmap-volume-test: STEP: delete the pod Jun 30 23:56:21.353: INFO: Waiting for pod pod-configmaps-0b518e97-086a-47b8-a02d-b9364a486212 to disappear Jun 30 23:56:21.361: INFO: Pod pod-configmaps-0b518e97-086a-47b8-a02d-b9364a486212 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 30 23:56:21.361: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-7024" for this suite. 
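------------------------------
For reference: the "consumable as non-root" ConfigMap case above combines a ConfigMap volume with a pod-level runAsUser. A minimal sketch, assuming an existing ConfigMap named demo-config (illustrative, as are the other names); ConfigMap files default to mode 0644, so the non-root user can read them.

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	uid := int64(1000) // run the container as a non-root user
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "configmap-nonroot-demo"}, // illustrative
		Spec: corev1.PodSpec{
			RestartPolicy:   corev1.RestartPolicyNever,
			SecurityContext: &corev1.PodSecurityContext{RunAsUser: &uid},
			Volumes: []corev1.Volume{{
				Name: "config",
				VolumeSource: corev1.VolumeSource{
					ConfigMap: &corev1.ConfigMapVolumeSource{
						LocalObjectReference: corev1.LocalObjectReference{Name: "demo-config"}, // illustrative
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:         "configmap-volume-test",
				Image:        "busybox:1.29",
				Command:      []string{"/bin/sh", "-c", "id && cat /etc/config/*"},
				VolumeMounts: []corev1.VolumeMount{{Name: "config", MountPath: "/etc/config"}},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}
------------------------------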
• [SLOW TEST:6.246 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36 should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]","total":294,"completed":62,"skipped":861,"failed":0} SSSSSSSSSSS ------------------------------ [sig-network] Services should provide secure master service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 30 23:56:21.377: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:809 [It] should provide secure master service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 30 23:56:21.445: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-5298" for this suite. [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:813 •{"msg":"PASSED [sig-network] Services should provide secure master service [Conformance]","total":294,"completed":63,"skipped":872,"failed":0} ------------------------------ [sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 30 23:56:21.453: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:134 [It] should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Jun 30 23:56:21.610: INFO: Creating daemon "daemon-set" with a node selector STEP: Initially, daemon pods should not be running on any nodes. Jun 30 23:56:21.624: INFO: Number of nodes with available pods: 0 Jun 30 23:56:21.624: INFO: Number of running nodes: 0, number of available pods: 0 STEP: Change node label to blue, check that daemon pod is launched. 
Jun 30 23:56:21.742: INFO: Number of nodes with available pods: 0 Jun 30 23:56:21.742: INFO: Node latest-worker2 is running more than one daemon pod Jun 30 23:56:22.822: INFO: Number of nodes with available pods: 0 Jun 30 23:56:22.822: INFO: Node latest-worker2 is running more than one daemon pod Jun 30 23:56:23.747: INFO: Number of nodes with available pods: 0 Jun 30 23:56:23.747: INFO: Node latest-worker2 is running more than one daemon pod Jun 30 23:56:24.745: INFO: Number of nodes with available pods: 1 Jun 30 23:56:24.745: INFO: Number of running nodes: 1, number of available pods: 1 STEP: Update the node label to green, and wait for daemons to be unscheduled Jun 30 23:56:24.810: INFO: Number of nodes with available pods: 1 Jun 30 23:56:24.810: INFO: Number of running nodes: 0, number of available pods: 1 Jun 30 23:56:25.815: INFO: Number of nodes with available pods: 0 Jun 30 23:56:25.815: INFO: Number of running nodes: 0, number of available pods: 0 STEP: Update DaemonSet node selector to green, and change its update strategy to RollingUpdate Jun 30 23:56:25.829: INFO: Number of nodes with available pods: 0 Jun 30 23:56:25.829: INFO: Node latest-worker2 is running more than one daemon pod Jun 30 23:56:27.044: INFO: Number of nodes with available pods: 0 Jun 30 23:56:27.044: INFO: Node latest-worker2 is running more than one daemon pod Jun 30 23:56:27.834: INFO: Number of nodes with available pods: 0 Jun 30 23:56:27.834: INFO: Node latest-worker2 is running more than one daemon pod Jun 30 23:56:28.834: INFO: Number of nodes with available pods: 0 Jun 30 23:56:28.834: INFO: Node latest-worker2 is running more than one daemon pod Jun 30 23:56:29.834: INFO: Number of nodes with available pods: 0 Jun 30 23:56:29.834: INFO: Node latest-worker2 is running more than one daemon pod Jun 30 23:56:30.834: INFO: Number of nodes with available pods: 0 Jun 30 23:56:30.834: INFO: Node latest-worker2 is running more than one daemon pod Jun 30 23:56:31.834: INFO: Number of nodes with available pods: 0 Jun 30 23:56:31.834: INFO: Node latest-worker2 is running more than one daemon pod Jun 30 23:56:32.834: INFO: Number of nodes with available pods: 0 Jun 30 23:56:32.834: INFO: Node latest-worker2 is running more than one daemon pod Jun 30 23:56:33.834: INFO: Number of nodes with available pods: 0 Jun 30 23:56:33.834: INFO: Node latest-worker2 is running more than one daemon pod Jun 30 23:56:34.834: INFO: Number of nodes with available pods: 0 Jun 30 23:56:34.834: INFO: Node latest-worker2 is running more than one daemon pod Jun 30 23:56:35.834: INFO: Number of nodes with available pods: 0 Jun 30 23:56:35.834: INFO: Node latest-worker2 is running more than one daemon pod Jun 30 23:56:36.834: INFO: Number of nodes with available pods: 0 Jun 30 23:56:36.834: INFO: Node latest-worker2 is running more than one daemon pod Jun 30 23:56:37.835: INFO: Number of nodes with available pods: 0 Jun 30 23:56:37.835: INFO: Node latest-worker2 is running more than one daemon pod Jun 30 23:56:38.833: INFO: Number of nodes with available pods: 1 Jun 30 23:56:38.833: INFO: Number of running nodes: 1, number of available pods: 1 [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:100 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-5539, will wait for the garbage collector to delete the pods Jun 30 23:56:38.900: INFO: Deleting DaemonSet.extensions daemon-set took: 6.927324ms Jun 30 
23:56:39.200: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.272745ms Jun 30 23:56:43.805: INFO: Number of nodes with available pods: 0 Jun 30 23:56:43.805: INFO: Number of running nodes: 0, number of available pods: 0 Jun 30 23:56:43.808: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-5539/daemonsets","resourceVersion":"17237684"},"items":null} Jun 30 23:56:43.823: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-5539/pods","resourceVersion":"17237684"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 30 23:56:43.841: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-5539" for this suite. • [SLOW TEST:22.435 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance]","total":294,"completed":64,"skipped":872,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 30 23:56:43.890: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin Jun 30 23:56:43.951: INFO: Waiting up to 5m0s for pod "downwardapi-volume-754db847-4ebc-45b5-bd12-30234f2594a9" in namespace "downward-api-2708" to be "Succeeded or Failed" Jun 30 23:56:43.967: INFO: Pod "downwardapi-volume-754db847-4ebc-45b5-bd12-30234f2594a9": Phase="Pending", Reason="", readiness=false. Elapsed: 16.279293ms Jun 30 23:56:45.986: INFO: Pod "downwardapi-volume-754db847-4ebc-45b5-bd12-30234f2594a9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.034800783s Jun 30 23:56:48.139: INFO: Pod "downwardapi-volume-754db847-4ebc-45b5-bd12-30234f2594a9": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.187903131s STEP: Saw pod success Jun 30 23:56:48.139: INFO: Pod "downwardapi-volume-754db847-4ebc-45b5-bd12-30234f2594a9" satisfied condition "Succeeded or Failed" Jun 30 23:56:48.142: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-754db847-4ebc-45b5-bd12-30234f2594a9 container client-container: STEP: delete the pod Jun 30 23:56:48.202: INFO: Waiting for pod downwardapi-volume-754db847-4ebc-45b5-bd12-30234f2594a9 to disappear Jun 30 23:56:48.206: INFO: Pod downwardapi-volume-754db847-4ebc-45b5-bd12-30234f2594a9 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 30 23:56:48.206: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-2708" for this suite. •{"msg":"PASSED [sig-storage] Downward API volume should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":294,"completed":65,"skipped":912,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 30 23:56:48.213: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating the pod Jun 30 23:56:52.821: INFO: Successfully updated pod "labelsupdate1dbc4855-0002-4a00-bc0c-641fdd33fc78" [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 30 23:56:54.857: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-297" for this suite. 
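------------------------------
For reference: the "update labels on modification" spec above works because downward API projections are refreshed by the kubelet when pod metadata changes. A sketch of the projected volume wiring, with illustrative names; after patching the pod's labels, re-reading the mounted file shows the new values.

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{
			Name:   "labelsupdate-demo", // illustrative
			Labels: map[string]string{"build": "one"},
		},
		Spec: corev1.PodSpec{
			Volumes: []corev1.Volume{{
				Name: "podinfo",
				VolumeSource: corev1.VolumeSource{
					Projected: &corev1.ProjectedVolumeSource{
						Sources: []corev1.VolumeProjection{{
							DownwardAPI: &corev1.DownwardAPIProjection{
								Items: []corev1.DownwardAPIVolumeFile{{
									Path:     "labels",
									FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.labels"},
								}},
							},
						}},
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:  "client-container",
				Image: "busybox:1.29",
				// Re-reading the file after a label patch shows the kubelet's refresh.
				Command:      []string{"/bin/sh", "-c", "while true; do cat /etc/podinfo/labels; sleep 5; done"},
				VolumeMounts: []corev1.VolumeMount{{Name: "podinfo", MountPath: "/etc/podinfo"}},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}
------------------------------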
• [SLOW TEST:6.649 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:36 should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance]","total":294,"completed":66,"skipped":931,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPreemption [Serial] PreemptionExecutionPath runs ReplicaSets to verify preemption running path [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 30 23:56:54.863: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-preemption STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:85 Jun 30 23:56:55.040: INFO: Waiting up to 1m0s for all nodes to be ready Jun 30 23:57:55.064: INFO: Waiting for terminating namespaces to be deleted... [BeforeEach] PreemptionExecutionPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 30 23:57:55.067: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-preemption-path STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] PreemptionExecutionPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:484 STEP: Finding an available node STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. Jun 30 23:57:59.196: INFO: found a healthy node: latest-worker2 [It] runs ReplicaSets to verify preemption running path [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Jun 30 23:58:13.364: INFO: pods created so far: [1 1 1] Jun 30 23:58:13.364: INFO: length of pods created so far: 3 Jun 30 23:58:25.391: INFO: pods created so far: [2 2 1] [AfterEach] PreemptionExecutionPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 30 23:58:32.392: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-preemption-path-4294" for this suite. [AfterEach] PreemptionExecutionPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:456 [AfterEach] [sig-scheduling] SchedulerPreemption [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 30 23:58:32.547: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-preemption-832" for this suite. 
[AfterEach] [sig-scheduling] SchedulerPreemption [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:75 • [SLOW TEST:97.776 seconds] [sig-scheduling] SchedulerPreemption [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 PreemptionExecutionPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:445 runs ReplicaSets to verify preemption running path [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPreemption [Serial] PreemptionExecutionPath runs ReplicaSets to verify preemption running path [Conformance]","total":294,"completed":67,"skipped":951,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 30 23:58:32.640: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating secret with name secret-test-f8c836e5-7ad8-498b-87c1-8a525c6f420d STEP: Creating a pod to test consume secrets Jun 30 23:58:32.730: INFO: Waiting up to 5m0s for pod "pod-secrets-230805d8-69fd-49f2-bb7d-0f98f52c539f" in namespace "secrets-638" to be "Succeeded or Failed" Jun 30 23:58:32.742: INFO: Pod "pod-secrets-230805d8-69fd-49f2-bb7d-0f98f52c539f": Phase="Pending", Reason="", readiness=false. Elapsed: 12.26896ms Jun 30 23:58:34.754: INFO: Pod "pod-secrets-230805d8-69fd-49f2-bb7d-0f98f52c539f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024169031s Jun 30 23:58:36.757: INFO: Pod "pod-secrets-230805d8-69fd-49f2-bb7d-0f98f52c539f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.026885491s STEP: Saw pod success Jun 30 23:58:36.757: INFO: Pod "pod-secrets-230805d8-69fd-49f2-bb7d-0f98f52c539f" satisfied condition "Succeeded or Failed" Jun 30 23:58:36.778: INFO: Trying to get logs from node latest-worker pod pod-secrets-230805d8-69fd-49f2-bb7d-0f98f52c539f container secret-volume-test: STEP: delete the pod Jun 30 23:58:36.838: INFO: Waiting for pod pod-secrets-230805d8-69fd-49f2-bb7d-0f98f52c539f to disappear Jun 30 23:58:36.850: INFO: Pod pod-secrets-230805d8-69fd-49f2-bb7d-0f98f52c539f no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 30 23:58:36.850: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-638" for this suite. 
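------------------------------
For reference: the defaultMode case above sets the permission bits applied to every file projected from the Secret. A minimal sketch, assuming an existing Secret named demo-secret (illustrative, as are the other names):

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	mode := int32(0400) // octal 0400; note it serializes as decimal 256 in JSON
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "secret-defaultmode-demo"}, // illustrative
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "secret-volume",
				VolumeSource: corev1.VolumeSource{
					Secret: &corev1.SecretVolumeSource{
						SecretName:  "demo-secret", // illustrative
						DefaultMode: &mode,         // applied to every projected key file
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:         "secret-volume-test",
				Image:        "busybox:1.29",
				Command:      []string{"/bin/sh", "-c", "ls -l /etc/secret-volume"},
				VolumeMounts: []corev1.VolumeMount{{Name: "secret-volume", MountPath: "/etc/secret-volume"}},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}
------------------------------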
•{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":294,"completed":68,"skipped":975,"failed":0} S ------------------------------ [sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 30 23:58:36.858: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating projection with configMap that has name projected-configmap-test-upd-7621405d-f0bb-4cf7-80cf-6d92182a73c2 STEP: Creating the pod STEP: Updating configmap projected-configmap-test-upd-7621405d-f0bb-4cf7-80cf-6d92182a73c2 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 1 00:00:05.386: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8703" for this suite. • [SLOW TEST:88.537 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:36 updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance]","total":294,"completed":69,"skipped":976,"failed":0} SS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 1 00:00:05.395: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Jul 1 00:00:06.027: INFO: new replicaset for deployment "sample-webhook-deployment" is yet to be created Jul 1 00:00:08.039: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729158406, 
loc:(*time.Location)(0x80643c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729158406, loc:(*time.Location)(0x80643c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729158406, loc:(*time.Location)(0x80643c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729158406, loc:(*time.Location)(0x80643c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Jul 1 00:00:11.055: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] patching/updating a validating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a validating webhook configuration STEP: Creating a configMap that does not comply to the validation webhook rules STEP: Updating a validating webhook configuration's rules to not include the create operation STEP: Creating a configMap that does not comply to the validation webhook rules STEP: Patching a validating webhook configuration's rules to include the create operation STEP: Creating a configMap that does not comply to the validation webhook rules [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 1 00:00:11.520: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-6703" for this suite. STEP: Destroying namespace "webhook-6703-markers" for this suite. 
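------------------------------
For reference: the patching/updating steps above work by editing the rules of a ValidatingWebhookConfiguration; removing or re-adding the CREATE operation is what toggles whether configMap creates are intercepted. Below is a sketch of such a configuration using the admissionregistration/v1 types; the names, the service reference, and the omission of a caBundle are illustrative assumptions (a real configuration needs a CA bundle or cluster trust for the webhook's serving cert).

package main

import (
	"encoding/json"
	"fmt"

	admissionregistrationv1 "k8s.io/api/admissionregistration/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	sideEffects := admissionregistrationv1.SideEffectClassNone
	failurePolicy := admissionregistrationv1.Fail
	path := "/validate" // illustrative serving path
	port := int32(443)

	cfg := &admissionregistrationv1.ValidatingWebhookConfiguration{
		ObjectMeta: metav1.ObjectMeta{Name: "demo-validating-webhook"}, // illustrative
		Webhooks: []admissionregistrationv1.ValidatingWebhook{{
			Name: "deny-configmaps.example.com", // illustrative
			ClientConfig: admissionregistrationv1.WebhookClientConfig{
				Service: &admissionregistrationv1.ServiceReference{
					Namespace: "default", Name: "e2e-test-webhook", Path: &path, Port: &port,
				},
				// CABundle omitted here; required in practice to verify the server.
			},
			Rules: []admissionregistrationv1.RuleWithOperations{{
				// Patching this Operations list is what the test exercises:
				// dropping CREATE stops the webhook from seeing creates at all.
				Operations: []admissionregistrationv1.OperationType{admissionregistrationv1.Create},
				Rule: admissionregistrationv1.Rule{
					APIGroups:   []string{""},
					APIVersions: []string{"v1"},
					Resources:   []string{"configmaps"},
				},
			}},
			SideEffects:             &sideEffects,
			FailurePolicy:           &failurePolicy,
			AdmissionReviewVersions: []string{"v1", "v1beta1"},
		}},
	}
	out, _ := json.MarshalIndent(cfg, "", "  ")
	fmt.Println(string(out))
}
------------------------------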
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:6.424 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 patching/updating a validating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance]","total":294,"completed":70,"skipped":978,"failed":0} SSSSSS ------------------------------ [sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 1 00:00:11.819: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a test namespace STEP: Waiting for a default service account to be provisioned in namespace STEP: Creating a pod in the namespace STEP: Waiting for the pod to have running status STEP: Deleting the namespace STEP: Waiting for the namespace to be removed. STEP: Recreating the namespace STEP: Verifying there are no pods in the namespace [AfterEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 1 00:00:43.129: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "namespaces-4216" for this suite. STEP: Destroying namespace "nsdeletetest-9777" for this suite. Jul 1 00:00:43.170: INFO: Namespace nsdeletetest-9777 was already deleted STEP: Destroying namespace "nsdeletetest-1651" for this suite. 
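------------------------------
For reference: the namespace test above waits out the asynchronous deletion flow (namespace enters Terminating, its pods are removed, then the namespace object itself disappears). A client-go sketch of that wait, assuming a reachable kubeconfig; the namespace name and timeouts are illustrative, not the suite's.

package main

import (
	"context"
	"fmt"
	"os"
	"path/filepath"
	"time"

	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	kubeconfig := filepath.Join(os.Getenv("HOME"), ".kube", "config")
	config, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	ctx := context.TODO()
	name := "nsdeletetest-demo" // illustrative namespace

	if err := cs.CoreV1().Namespaces().Delete(ctx, name, metav1.DeleteOptions{}); err != nil {
		panic(err)
	}
	// Deletion is asynchronous: poll until the namespace object is gone,
	// which implies every pod it contained has been removed.
	err = wait.PollImmediate(2*time.Second, 3*time.Minute, func() (bool, error) {
		_, getErr := cs.CoreV1().Namespaces().Get(ctx, name, metav1.GetOptions{})
		if apierrors.IsNotFound(getErr) {
			return true, nil // fully removed, as the test asserts
		}
		return false, getErr // nil keeps polling; a real error aborts
	})
	if err != nil {
		panic(err)
	}
	fmt.Println("namespace and all of its pods are gone")
}
------------------------------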
• [SLOW TEST:31.355 seconds] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance]","total":294,"completed":71,"skipped":984,"failed":0} SSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl expose should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 1 00:00:43.175: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:251 [It] should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating Agnhost RC Jul 1 00:00:43.270: INFO: namespace kubectl-7265 Jul 1 00:00:43.270: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-7265' Jul 1 00:00:46.573: INFO: stderr: "" Jul 1 00:00:46.573: INFO: stdout: "replicationcontroller/agnhost-master created\n" STEP: Waiting for Agnhost master to start. Jul 1 00:00:47.578: INFO: Selector matched 1 pods for map[app:agnhost] Jul 1 00:00:47.578: INFO: Found 0 / 1 Jul 1 00:00:48.675: INFO: Selector matched 1 pods for map[app:agnhost] Jul 1 00:00:48.675: INFO: Found 0 / 1 Jul 1 00:00:49.590: INFO: Selector matched 1 pods for map[app:agnhost] Jul 1 00:00:49.590: INFO: Found 0 / 1 Jul 1 00:00:50.579: INFO: Selector matched 1 pods for map[app:agnhost] Jul 1 00:00:50.579: INFO: Found 1 / 1 Jul 1 00:00:50.579: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 Jul 1 00:00:50.582: INFO: Selector matched 1 pods for map[app:agnhost] Jul 1 00:00:50.582: INFO: ForEach: Found 1 pods from the filter. Now looping through them. Jul 1 00:00:50.582: INFO: wait on agnhost-master startup in kubectl-7265 Jul 1 00:00:50.582: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config logs agnhost-master-hmrf8 agnhost-master --namespace=kubectl-7265' Jul 1 00:00:50.706: INFO: stderr: "" Jul 1 00:00:50.706: INFO: stdout: "Paused\n" STEP: exposing RC Jul 1 00:00:50.706: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config expose rc agnhost-master --name=rm2 --port=1234 --target-port=6379 --namespace=kubectl-7265' Jul 1 00:00:50.883: INFO: stderr: "" Jul 1 00:00:50.883: INFO: stdout: "service/rm2 exposed\n" Jul 1 00:00:50.890: INFO: Service rm2 in namespace kubectl-7265 found. 
STEP: exposing service Jul 1 00:00:52.899: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config expose service rm2 --name=rm3 --port=2345 --target-port=6379 --namespace=kubectl-7265' Jul 1 00:00:53.044: INFO: stderr: "" Jul 1 00:00:53.044: INFO: stdout: "service/rm3 exposed\n" Jul 1 00:00:53.051: INFO: Service rm3 in namespace kubectl-7265 found. [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 1 00:00:55.058: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-7265" for this suite. • [SLOW TEST:11.893 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl expose /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1229 should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl expose should create services for rc [Conformance]","total":294,"completed":72,"skipped":992,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 1 00:00:55.068: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:179 [It] should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Jul 1 00:00:55.151: INFO: >>> kubeConfig: /root/.kube/config STEP: creating the pod STEP: submitting the pod to kubernetes [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 1 00:00:59.190: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-6449" for this suite. 
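------------------------------
For reference: the Pods spec above fetches container logs over a websocket connection to the API server. client-go's everyday equivalent hits the same logs subresource but streams it over plain HTTP, as sketched below; the pod name, namespace, and kubeconfig path are illustrative assumptions.

package main

import (
	"context"
	"io"
	"os"
	"path/filepath"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	kubeconfig := filepath.Join(os.Getenv("HOME"), ".kube", "config")
	config, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	// GetLogs targets the same endpoint the e2e test hits; only the transport differs.
	req := cs.CoreV1().Pods("default").GetLogs("my-pod", &corev1.PodLogOptions{Follow: false})
	stream, err := req.Stream(context.TODO())
	if err != nil {
		panic(err)
	}
	defer stream.Close()
	if _, err := io.Copy(os.Stdout, stream); err != nil {
		panic(err)
	}
}
------------------------------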
•{"msg":"PASSED [k8s.io] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance]","total":294,"completed":73,"skipped":1012,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Variable Expansion should not change the subpath mount on a container restart if the environment variable changes [sig-storage][Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 1 00:00:59.198: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should not change the subpath mount on a container restart if the environment variable changes [sig-storage][Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating pod var-expansion-50e6b9a5-e51c-42a1-825d-61a1f015f928 STEP: updating the pod Jul 1 00:01:07.843: INFO: Successfully updated pod "var-expansion-50e6b9a5-e51c-42a1-825d-61a1f015f928" STEP: waiting for pod and container restart STEP: Failing liveness probe Jul 1 00:01:07.874: INFO: ExecWithOptions {Command:[/bin/sh -c rm /volume_mount/foo/test.log] Namespace:var-expansion-9253 PodName:var-expansion-50e6b9a5-e51c-42a1-825d-61a1f015f928 ContainerName:dapi-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jul 1 00:01:07.874: INFO: >>> kubeConfig: /root/.kube/config I0701 00:01:07.943715 8 log.go:172] (0xc004044790) (0xc001543220) Create stream I0701 00:01:07.943742 8 log.go:172] (0xc004044790) (0xc001543220) Stream added, broadcasting: 1 I0701 00:01:07.945911 8 log.go:172] (0xc004044790) Reply frame received for 1 I0701 00:01:07.945946 8 log.go:172] (0xc004044790) (0xc001468dc0) Create stream I0701 00:01:07.945959 8 log.go:172] (0xc004044790) (0xc001468dc0) Stream added, broadcasting: 3 I0701 00:01:07.946714 8 log.go:172] (0xc004044790) Reply frame received for 3 I0701 00:01:07.946742 8 log.go:172] (0xc004044790) (0xc001468e60) Create stream I0701 00:01:07.946754 8 log.go:172] (0xc004044790) (0xc001468e60) Stream added, broadcasting: 5 I0701 00:01:07.947479 8 log.go:172] (0xc004044790) Reply frame received for 5 I0701 00:01:08.014485 8 log.go:172] (0xc004044790) Data frame received for 3 I0701 00:01:08.014552 8 log.go:172] (0xc001468dc0) (3) Data frame handling I0701 00:01:08.014602 8 log.go:172] (0xc004044790) Data frame received for 5 I0701 00:01:08.014630 8 log.go:172] (0xc001468e60) (5) Data frame handling I0701 00:01:08.016374 8 log.go:172] (0xc004044790) Data frame received for 1 I0701 00:01:08.016404 8 log.go:172] (0xc001543220) (1) Data frame handling I0701 00:01:08.016424 8 log.go:172] (0xc001543220) (1) Data frame sent I0701 00:01:08.016438 8 log.go:172] (0xc004044790) (0xc001543220) Stream removed, broadcasting: 1 I0701 00:01:08.016461 8 log.go:172] (0xc004044790) Go away received I0701 00:01:08.016564 8 log.go:172] (0xc004044790) (0xc001543220) Stream removed, broadcasting: 1 I0701 00:01:08.016593 8 log.go:172] (0xc004044790) (0xc001468dc0) Stream removed, broadcasting: 3 I0701 00:01:08.016606 8 log.go:172] (0xc004044790) (0xc001468e60) Stream removed, broadcasting: 5 Jul 1 00:01:08.016: INFO: Pod exec output: / STEP: Waiting for container to restart Jul 1 
00:01:08.020: INFO: Container dapi-container, restarts: 0 Jul 1 00:01:18.027: INFO: Container dapi-container, restarts: 0 Jul 1 00:01:28.025: INFO: Container dapi-container, restarts: 0 Jul 1 00:01:38.025: INFO: Container dapi-container, restarts: 0 Jul 1 00:01:48.026: INFO: Container dapi-container, restarts: 1 Jul 1 00:01:48.026: INFO: Container has restart count: 1 STEP: Rewriting the file Jul 1 00:01:48.026: INFO: ExecWithOptions {Command:[/bin/sh -c echo test-after > /volume_mount/foo/test.log] Namespace:var-expansion-9253 PodName:var-expansion-50e6b9a5-e51c-42a1-825d-61a1f015f928 ContainerName:side-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jul 1 00:01:48.026: INFO: >>> kubeConfig: /root/.kube/config I0701 00:01:48.065511 8 log.go:172] (0xc002adc000) (0xc001c5ee60) Create stream I0701 00:01:48.065548 8 log.go:172] (0xc002adc000) (0xc001c5ee60) Stream added, broadcasting: 1 I0701 00:01:48.067374 8 log.go:172] (0xc002adc000) Reply frame received for 1 I0701 00:01:48.067413 8 log.go:172] (0xc002adc000) (0xc000d5a460) Create stream I0701 00:01:48.067423 8 log.go:172] (0xc002adc000) (0xc000d5a460) Stream added, broadcasting: 3 I0701 00:01:48.068199 8 log.go:172] (0xc002adc000) Reply frame received for 3 I0701 00:01:48.068254 8 log.go:172] (0xc002adc000) (0xc0014c4d20) Create stream I0701 00:01:48.068281 8 log.go:172] (0xc002adc000) (0xc0014c4d20) Stream added, broadcasting: 5 I0701 00:01:48.069224 8 log.go:172] (0xc002adc000) Reply frame received for 5 I0701 00:01:48.133961 8 log.go:172] (0xc002adc000) Data frame received for 5 I0701 00:01:48.134027 8 log.go:172] (0xc0014c4d20) (5) Data frame handling I0701 00:01:48.134066 8 log.go:172] (0xc002adc000) Data frame received for 3 I0701 00:01:48.134089 8 log.go:172] (0xc000d5a460) (3) Data frame handling I0701 00:01:48.135564 8 log.go:172] (0xc002adc000) Data frame received for 1 I0701 00:01:48.135633 8 log.go:172] (0xc001c5ee60) (1) Data frame handling I0701 00:01:48.135667 8 log.go:172] (0xc001c5ee60) (1) Data frame sent I0701 00:01:48.135685 8 log.go:172] (0xc002adc000) (0xc001c5ee60) Stream removed, broadcasting: 1 I0701 00:01:48.135706 8 log.go:172] (0xc002adc000) Go away received I0701 00:01:48.135881 8 log.go:172] (0xc002adc000) (0xc001c5ee60) Stream removed, broadcasting: 1 I0701 00:01:48.135925 8 log.go:172] (0xc002adc000) (0xc000d5a460) Stream removed, broadcasting: 3 I0701 00:01:48.135954 8 log.go:172] (0xc002adc000) (0xc0014c4d20) Stream removed, broadcasting: 5 Jul 1 00:01:48.135: INFO: Exec stderr: "" Jul 1 00:01:48.135: INFO: Pod exec output: STEP: Waiting for container to stop restarting Jul 1 00:02:16.145: INFO: Container has restart count: 2 Jul 1 00:03:18.143: INFO: Container restart has stabilized STEP: test for subpath mounted with old value Jul 1 00:03:18.146: INFO: ExecWithOptions {Command:[/bin/sh -c test -f /volume_mount/foo/test.log] Namespace:var-expansion-9253 PodName:var-expansion-50e6b9a5-e51c-42a1-825d-61a1f015f928 ContainerName:dapi-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jul 1 00:03:18.146: INFO: >>> kubeConfig: /root/.kube/config I0701 00:03:18.184128 8 log.go:172] (0xc002e3e0b0) (0xc0025ff860) Create stream I0701 00:03:18.184160 8 log.go:172] (0xc002e3e0b0) (0xc0025ff860) Stream added, broadcasting: 1 I0701 00:03:18.186315 8 log.go:172] (0xc002e3e0b0) Reply frame received for 1 I0701 00:03:18.186368 8 log.go:172] (0xc002e3e0b0) (0xc000e900a0) Create stream I0701 00:03:18.186391 8 log.go:172] (0xc002e3e0b0) (0xc000e900a0) Stream 
added, broadcasting: 3 I0701 00:03:18.187608 8 log.go:172] (0xc002e3e0b0) Reply frame received for 3 I0701 00:03:18.187649 8 log.go:172] (0xc002e3e0b0) (0xc0025ffa40) Create stream I0701 00:03:18.187664 8 log.go:172] (0xc002e3e0b0) (0xc0025ffa40) Stream added, broadcasting: 5 I0701 00:03:18.189053 8 log.go:172] (0xc002e3e0b0) Reply frame received for 5 I0701 00:03:18.252074 8 log.go:172] (0xc002e3e0b0) Data frame received for 5 I0701 00:03:18.252110 8 log.go:172] (0xc0025ffa40) (5) Data frame handling I0701 00:03:18.252147 8 log.go:172] (0xc002e3e0b0) Data frame received for 3 I0701 00:03:18.252182 8 log.go:172] (0xc000e900a0) (3) Data frame handling I0701 00:03:18.254187 8 log.go:172] (0xc002e3e0b0) Data frame received for 1 I0701 00:03:18.254224 8 log.go:172] (0xc0025ff860) (1) Data frame handling I0701 00:03:18.254256 8 log.go:172] (0xc0025ff860) (1) Data frame sent I0701 00:03:18.254287 8 log.go:172] (0xc002e3e0b0) (0xc0025ff860) Stream removed, broadcasting: 1 I0701 00:03:18.254419 8 log.go:172] (0xc002e3e0b0) Go away received I0701 00:03:18.254496 8 log.go:172] (0xc002e3e0b0) (0xc0025ff860) Stream removed, broadcasting: 1 I0701 00:03:18.254547 8 log.go:172] (0xc002e3e0b0) (0xc000e900a0) Stream removed, broadcasting: 3 I0701 00:03:18.254579 8 log.go:172] (0xc002e3e0b0) (0xc0025ffa40) Stream removed, broadcasting: 5 Jul 1 00:03:18.258: INFO: ExecWithOptions {Command:[/bin/sh -c test ! -f /volume_mount/newsubpath/test.log] Namespace:var-expansion-9253 PodName:var-expansion-50e6b9a5-e51c-42a1-825d-61a1f015f928 ContainerName:dapi-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jul 1 00:03:18.258: INFO: >>> kubeConfig: /root/.kube/config I0701 00:03:18.290760 8 log.go:172] (0xc001eb5600) (0xc001398be0) Create stream I0701 00:03:18.290791 8 log.go:172] (0xc001eb5600) (0xc001398be0) Stream added, broadcasting: 1 I0701 00:03:18.293008 8 log.go:172] (0xc001eb5600) Reply frame received for 1 I0701 00:03:18.293050 8 log.go:172] (0xc001eb5600) (0xc001398c80) Create stream I0701 00:03:18.293066 8 log.go:172] (0xc001eb5600) (0xc001398c80) Stream added, broadcasting: 3 I0701 00:03:18.294208 8 log.go:172] (0xc001eb5600) Reply frame received for 3 I0701 00:03:18.294247 8 log.go:172] (0xc001eb5600) (0xc0021d2140) Create stream I0701 00:03:18.294261 8 log.go:172] (0xc001eb5600) (0xc0021d2140) Stream added, broadcasting: 5 I0701 00:03:18.295133 8 log.go:172] (0xc001eb5600) Reply frame received for 5 I0701 00:03:18.369035 8 log.go:172] (0xc001eb5600) Data frame received for 3 I0701 00:03:18.369068 8 log.go:172] (0xc001398c80) (3) Data frame handling I0701 00:03:18.369098 8 log.go:172] (0xc001eb5600) Data frame received for 5 I0701 00:03:18.369306 8 log.go:172] (0xc0021d2140) (5) Data frame handling I0701 00:03:18.370260 8 log.go:172] (0xc001eb5600) Data frame received for 1 I0701 00:03:18.370280 8 log.go:172] (0xc001398be0) (1) Data frame handling I0701 00:03:18.370293 8 log.go:172] (0xc001398be0) (1) Data frame sent I0701 00:03:18.370320 8 log.go:172] (0xc001eb5600) (0xc001398be0) Stream removed, broadcasting: 1 I0701 00:03:18.370357 8 log.go:172] (0xc001eb5600) Go away received I0701 00:03:18.370435 8 log.go:172] (0xc001eb5600) (0xc001398be0) Stream removed, broadcasting: 1 I0701 00:03:18.370454 8 log.go:172] (0xc001eb5600) (0xc001398c80) Stream removed, broadcasting: 3 I0701 00:03:18.370462 8 log.go:172] (0xc001eb5600) (0xc0021d2140) Stream removed, broadcasting: 5 Jul 1 00:03:18.370: INFO: Deleting pod "var-expansion-50e6b9a5-e51c-42a1-825d-61a1f015f928" in namespace 
"var-expansion-9253" Jul 1 00:03:18.383: INFO: Wait up to 5m0s for pod "var-expansion-50e6b9a5-e51c-42a1-825d-61a1f015f928" to be fully deleted [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 1 00:03:56.430: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-9253" for this suite. • [SLOW TEST:177.240 seconds] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should not change the subpath mount on a container restart if the environment variable changes [sig-storage][Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Variable Expansion should not change the subpath mount on a container restart if the environment variable changes [sig-storage][Slow] [Conformance]","total":294,"completed":74,"skipped":1035,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 1 00:03:56.438: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64 STEP: create the container to handle the HTTPGet hook request. 
[It] should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the pod with lifecycle hook STEP: check poststart hook STEP: delete the pod with lifecycle hook Jul 1 00:04:04.647: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Jul 1 00:04:04.667: INFO: Pod pod-with-poststart-http-hook still exists Jul 1 00:04:06.668: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Jul 1 00:04:06.672: INFO: Pod pod-with-poststart-http-hook still exists Jul 1 00:04:08.668: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Jul 1 00:04:08.672: INFO: Pod pod-with-poststart-http-hook still exists Jul 1 00:04:10.668: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Jul 1 00:04:10.673: INFO: Pod pod-with-poststart-http-hook still exists Jul 1 00:04:12.668: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Jul 1 00:04:12.673: INFO: Pod pod-with-poststart-http-hook still exists Jul 1 00:04:14.668: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Jul 1 00:04:14.672: INFO: Pod pod-with-poststart-http-hook still exists Jul 1 00:04:16.668: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Jul 1 00:04:16.672: INFO: Pod pod-with-poststart-http-hook no longer exists [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 1 00:04:16.672: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-184" for this suite. • [SLOW TEST:20.243 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance]","total":294,"completed":75,"skipped":1054,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl label should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 1 00:04:16.683: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:251 [BeforeEach] Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1316 STEP: creating the pod Jul 1 00:04:16.768: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-3379' Jul 1 00:04:18.565: INFO: stderr: "" Jul 1 00:04:18.565: INFO: stdout: 
"pod/pause created\n" Jul 1 00:04:18.565: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [pause] Jul 1 00:04:18.565: INFO: Waiting up to 5m0s for pod "pause" in namespace "kubectl-3379" to be "running and ready" Jul 1 00:04:18.570: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 4.799482ms Jul 1 00:04:20.574: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009484871s Jul 1 00:04:22.579: INFO: Pod "pause": Phase="Running", Reason="", readiness=true. Elapsed: 4.013696185s Jul 1 00:04:22.579: INFO: Pod "pause" satisfied condition "running and ready" Jul 1 00:04:22.579: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [pause] [It] should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: adding the label testing-label with value testing-label-value to a pod Jul 1 00:04:22.579: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config label pods pause testing-label=testing-label-value --namespace=kubectl-3379' Jul 1 00:04:22.686: INFO: stderr: "" Jul 1 00:04:22.686: INFO: stdout: "pod/pause labeled\n" STEP: verifying the pod has the label testing-label with the value testing-label-value Jul 1 00:04:22.686: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-3379' Jul 1 00:04:22.799: INFO: stderr: "" Jul 1 00:04:22.799: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 4s testing-label-value\n" STEP: removing the label testing-label of a pod Jul 1 00:04:22.799: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config label pods pause testing-label- --namespace=kubectl-3379' Jul 1 00:04:22.912: INFO: stderr: "" Jul 1 00:04:22.912: INFO: stdout: "pod/pause labeled\n" STEP: verifying the pod doesn't have the label testing-label Jul 1 00:04:22.912: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-3379' Jul 1 00:04:23.012: INFO: stderr: "" Jul 1 00:04:23.012: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 5s \n" [AfterEach] Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1323 STEP: using delete to clean up resources Jul 1 00:04:23.012: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-3379' Jul 1 00:04:23.180: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" Jul 1 00:04:23.180: INFO: stdout: "pod \"pause\" force deleted\n" Jul 1 00:04:23.180: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get rc,svc -l name=pause --no-headers --namespace=kubectl-3379' Jul 1 00:04:23.548: INFO: stderr: "No resources found in kubectl-3379 namespace.\n" Jul 1 00:04:23.548: INFO: stdout: "" Jul 1 00:04:23.548: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods -l name=pause --namespace=kubectl-3379 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Jul 1 00:04:23.639: INFO: stderr: "" Jul 1 00:04:23.639: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 1 00:04:23.639: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-3379" for this suite. • [SLOW TEST:6.984 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1313 should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl label should update the label on a resource [Conformance]","total":294,"completed":76,"skipped":1104,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 1 00:04:23.667: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir volume type on tmpfs Jul 1 00:04:23.964: INFO: Waiting up to 5m0s for pod "pod-3cc071da-eb1f-42d3-a1f1-ba82ada82b72" in namespace "emptydir-5720" to be "Succeeded or Failed" Jul 1 00:04:24.149: INFO: Pod "pod-3cc071da-eb1f-42d3-a1f1-ba82ada82b72": Phase="Pending", Reason="", readiness=false. Elapsed: 184.601979ms Jul 1 00:04:26.239: INFO: Pod "pod-3cc071da-eb1f-42d3-a1f1-ba82ada82b72": Phase="Pending", Reason="", readiness=false. Elapsed: 2.275491262s Jul 1 00:04:28.244: INFO: Pod "pod-3cc071da-eb1f-42d3-a1f1-ba82ada82b72": Phase="Running", Reason="", readiness=true. Elapsed: 4.279716582s Jul 1 00:04:30.249: INFO: Pod "pod-3cc071da-eb1f-42d3-a1f1-ba82ada82b72": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.284615688s STEP: Saw pod success Jul 1 00:04:30.249: INFO: Pod "pod-3cc071da-eb1f-42d3-a1f1-ba82ada82b72" satisfied condition "Succeeded or Failed" Jul 1 00:04:30.252: INFO: Trying to get logs from node latest-worker2 pod pod-3cc071da-eb1f-42d3-a1f1-ba82ada82b72 container test-container: STEP: delete the pod Jul 1 00:04:30.335: INFO: Waiting for pod pod-3cc071da-eb1f-42d3-a1f1-ba82ada82b72 to disappear Jul 1 00:04:30.340: INFO: Pod pod-3cc071da-eb1f-42d3-a1f1-ba82ada82b72 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 1 00:04:30.340: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-5720" for this suite. • [SLOW TEST:6.680 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42 volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":294,"completed":77,"skipped":1121,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl replace should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 1 00:04:30.348: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:251 [BeforeEach] Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1564 [It] should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: running the image docker.io/library/httpd:2.4.38-alpine Jul 1 00:04:30.396: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config run e2e-test-httpd-pod --image=docker.io/library/httpd:2.4.38-alpine --labels=run=e2e-test-httpd-pod --namespace=kubectl-5312' Jul 1 00:04:30.520: INFO: stderr: "" Jul 1 00:04:30.520: INFO: stdout: "pod/e2e-test-httpd-pod created\n" STEP: verifying the pod e2e-test-httpd-pod is running STEP: verifying the pod e2e-test-httpd-pod was created Jul 1 00:04:35.571: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pod e2e-test-httpd-pod --namespace=kubectl-5312 -o json' Jul 1 00:04:35.676: INFO: stderr: "" Jul 1 00:04:35.676: INFO: stdout: "{\n \"apiVersion\": \"v1\",\n \"kind\": \"Pod\",\n \"metadata\": {\n \"creationTimestamp\": \"2020-07-01T00:04:30Z\",\n \"labels\": {\n \"run\": \"e2e-test-httpd-pod\"\n },\n \"managedFields\": [\n {\n \"apiVersion\": \"v1\",\n \"fieldsType\": \"FieldsV1\",\n \"fieldsV1\": {\n \"f:metadata\": {\n \"f:labels\": {\n \".\": {},\n \"f:run\": {}\n }\n },\n \"f:spec\": {\n \"f:containers\": 
{\n \"k:{\\\"name\\\":\\\"e2e-test-httpd-pod\\\"}\": {\n \".\": {},\n \"f:image\": {},\n \"f:imagePullPolicy\": {},\n \"f:name\": {},\n \"f:resources\": {},\n \"f:terminationMessagePath\": {},\n \"f:terminationMessagePolicy\": {}\n }\n },\n \"f:dnsPolicy\": {},\n \"f:enableServiceLinks\": {},\n \"f:restartPolicy\": {},\n \"f:schedulerName\": {},\n \"f:securityContext\": {},\n \"f:terminationGracePeriodSeconds\": {}\n }\n },\n \"manager\": \"kubectl-run\",\n \"operation\": \"Update\",\n \"time\": \"2020-07-01T00:04:30Z\"\n },\n {\n \"apiVersion\": \"v1\",\n \"fieldsType\": \"FieldsV1\",\n \"fieldsV1\": {\n \"f:status\": {\n \"f:conditions\": {\n \"k:{\\\"type\\\":\\\"ContainersReady\\\"}\": {\n \".\": {},\n \"f:lastProbeTime\": {},\n \"f:lastTransitionTime\": {},\n \"f:status\": {},\n \"f:type\": {}\n },\n \"k:{\\\"type\\\":\\\"Initialized\\\"}\": {\n \".\": {},\n \"f:lastProbeTime\": {},\n \"f:lastTransitionTime\": {},\n \"f:status\": {},\n \"f:type\": {}\n },\n \"k:{\\\"type\\\":\\\"Ready\\\"}\": {\n \".\": {},\n \"f:lastProbeTime\": {},\n \"f:lastTransitionTime\": {},\n \"f:status\": {},\n \"f:type\": {}\n }\n },\n \"f:containerStatuses\": {},\n \"f:hostIP\": {},\n \"f:phase\": {},\n \"f:podIP\": {},\n \"f:podIPs\": {\n \".\": {},\n \"k:{\\\"ip\\\":\\\"10.244.2.154\\\"}\": {\n \".\": {},\n \"f:ip\": {}\n }\n },\n \"f:startTime\": {}\n }\n },\n \"manager\": \"kubelet\",\n \"operation\": \"Update\",\n \"time\": \"2020-07-01T00:04:33Z\"\n }\n ],\n \"name\": \"e2e-test-httpd-pod\",\n \"namespace\": \"kubectl-5312\",\n \"resourceVersion\": \"17239691\",\n \"selfLink\": \"/api/v1/namespaces/kubectl-5312/pods/e2e-test-httpd-pod\",\n \"uid\": \"f16ff09b-b455-4839-9633-db945fb251b2\"\n },\n \"spec\": {\n \"containers\": [\n {\n \"image\": \"docker.io/library/httpd:2.4.38-alpine\",\n \"imagePullPolicy\": \"IfNotPresent\",\n \"name\": \"e2e-test-httpd-pod\",\n \"resources\": {},\n \"terminationMessagePath\": \"/dev/termination-log\",\n \"terminationMessagePolicy\": \"File\",\n \"volumeMounts\": [\n {\n \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n \"name\": \"default-token-nr7kk\",\n \"readOnly\": true\n }\n ]\n }\n ],\n \"dnsPolicy\": \"ClusterFirst\",\n \"enableServiceLinks\": true,\n \"nodeName\": \"latest-worker2\",\n \"priority\": 0,\n \"restartPolicy\": \"Always\",\n \"schedulerName\": \"default-scheduler\",\n \"securityContext\": {},\n \"serviceAccount\": \"default\",\n \"serviceAccountName\": \"default\",\n \"terminationGracePeriodSeconds\": 30,\n \"tolerations\": [\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/not-ready\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n },\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/unreachable\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n }\n ],\n \"volumes\": [\n {\n \"name\": \"default-token-nr7kk\",\n \"secret\": {\n \"defaultMode\": 420,\n \"secretName\": \"default-token-nr7kk\"\n }\n }\n ]\n },\n \"status\": {\n \"conditions\": [\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-07-01T00:04:30Z\",\n \"status\": \"True\",\n \"type\": \"Initialized\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-07-01T00:04:33Z\",\n \"status\": \"True\",\n \"type\": \"Ready\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-07-01T00:04:33Z\",\n \"status\": \"True\",\n \"type\": \"ContainersReady\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-07-01T00:04:30Z\",\n \"status\": \"True\",\n \"type\": 
\"PodScheduled\"\n }\n ],\n \"containerStatuses\": [\n {\n \"containerID\": \"containerd://6d97e497c9148981ebadff72feda139cd4c6680fd5d219c8023f03b4638222c6\",\n \"image\": \"docker.io/library/httpd:2.4.38-alpine\",\n \"imageID\": \"docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060\",\n \"lastState\": {},\n \"name\": \"e2e-test-httpd-pod\",\n \"ready\": true,\n \"restartCount\": 0,\n \"started\": true,\n \"state\": {\n \"running\": {\n \"startedAt\": \"2020-07-01T00:04:33Z\"\n }\n }\n }\n ],\n \"hostIP\": \"172.17.0.12\",\n \"phase\": \"Running\",\n \"podIP\": \"10.244.2.154\",\n \"podIPs\": [\n {\n \"ip\": \"10.244.2.154\"\n }\n ],\n \"qosClass\": \"BestEffort\",\n \"startTime\": \"2020-07-01T00:04:30Z\"\n }\n}\n" STEP: replace the image in the pod Jul 1 00:04:35.676: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config replace -f - --namespace=kubectl-5312' Jul 1 00:04:36.493: INFO: stderr: "" Jul 1 00:04:36.493: INFO: stdout: "pod/e2e-test-httpd-pod replaced\n" STEP: verifying the pod e2e-test-httpd-pod has the right image docker.io/library/busybox:1.29 [AfterEach] Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1569 Jul 1 00:04:36.518: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config delete pods e2e-test-httpd-pod --namespace=kubectl-5312' Jul 1 00:04:39.755: INFO: stderr: "" Jul 1 00:04:39.755: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 1 00:04:39.755: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-5312" for this suite. • [SLOW TEST:9.414 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1560 should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl replace should update a single-container pod's image [Conformance]","total":294,"completed":78,"skipped":1140,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a configMap. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 1 00:04:39.763: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a configMap. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a ConfigMap STEP: Ensuring resource quota status captures configMap creation STEP: Deleting a ConfigMap STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 1 00:04:55.982: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-18" for this suite. • [SLOW TEST:16.228 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a configMap. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a configMap. [Conformance]","total":294,"completed":79,"skipped":1185,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Variable Expansion should fail substituting values in a volume subpath with absolute path [sig-storage][Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 1 00:04:55.992: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should fail substituting values in a volume subpath with absolute path [sig-storage][Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Jul 1 00:06:56.102: INFO: Deleting pod "var-expansion-72d7999c-8fd7-4f5c-8f07-5bd10b83f74c" in namespace "var-expansion-4938" Jul 1 00:06:56.108: INFO: Wait up to 5m0s for pod "var-expansion-72d7999c-8fd7-4f5c-8f07-5bd10b83f74c" to be fully deleted [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 1 00:06:58.139: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-4938" for this suite. 
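For context, the failure asserted above comes from a volume subpath expansion that resolves to an absolute path, which the kubelet refuses at container start. A minimal sketch of a spec that would fail this way; the manifest the framework actually builds is not shown in the log, so the names below are illustrative:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: var-expansion-abs-subpath    # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "sleep 600"]
    env:
    - name: ABSOLUTE_PATH
      value: /tmp                    # the variable expands to an absolute path
    volumeMounts:
    - name: workdir
      mountPath: /volume_mount
      subPathExpr: $(ABSOLUTE_PATH)  # kubelet rejects the expanded absolute subpath
  volumes:
  - name: workdir
    emptyDir: {}
EOF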
• [SLOW TEST:122.155 seconds] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should fail substituting values in a volume subpath with absolute path [sig-storage][Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Variable Expansion should fail substituting values in a volume subpath with absolute path [sig-storage][Slow] [Conformance]","total":294,"completed":80,"skipped":1217,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] custom resource defaulting for requests and from storage works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 1 00:06:58.148: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] custom resource defaulting for requests and from storage works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Jul 1 00:06:58.268: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 1 00:06:59.470: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-8779" for this suite. 
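The defaulting behavior verified above relies on the default keyword in an apiextensions.k8s.io/v1 structural schema, applied both on create and when reading from storage. A minimal sketch with an invented group and kind, since the CRD the test registers is not shown in the log:

kubectl apply -f - <<'EOF'
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: crontabs.stable.example.com   # placeholder group and resource
spec:
  group: stable.example.com
  scope: Namespaced
  names:
    plural: crontabs
    singular: crontab
    kind: CronTab
  versions:
  - name: v1
    served: true
    storage: true
    schema:
      openAPIV3Schema:
        type: object
        properties:
          spec:
            type: object
            properties:
              replicas:
                type: integer
                default: 1   # filled in whenever the field is absent
EOF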
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] custom resource defaulting for requests and from storage works [Conformance]","total":294,"completed":81,"skipped":1250,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] updates the published spec when one version gets renamed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 1 00:06:59.481: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] updates the published spec when one version gets renamed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: set up a multi version CRD Jul 1 00:06:59.532: INFO: >>> kubeConfig: /root/.kube/config STEP: rename a version STEP: check the new version name is served STEP: check the old version name is removed STEP: check the other version is not changed [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 1 00:07:16.224: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-2626" for this suite. • [SLOW TEST:16.750 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 updates the published spec when one version gets renamed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] updates the published spec when one version gets renamed [Conformance]","total":294,"completed":82,"skipped":1275,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] [sig-node] PreStop should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 1 00:07:16.232: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename prestop STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pre_stop.go:171 [It] should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating server pod server in namespace prestop-5375 STEP: Waiting for pods to come up. 
STEP: Creating tester pod tester in namespace prestop-5375 STEP: Deleting pre-stop pod Jul 1 00:07:29.450: INFO: Saw: { "Hostname": "server", "Sent": null, "Received": { "prestop": 1 }, "Errors": null, "Log": [ "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up." ], "StillContactingPeers": true } STEP: Deleting the server pod [AfterEach] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 1 00:07:29.456: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "prestop-5375" for this suite. • [SLOW TEST:13.261 seconds] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] [sig-node] PreStop should call prestop when killing a pod [Conformance]","total":294,"completed":83,"skipped":1305,"failed":0} SSS ------------------------------ [sig-apps] Deployment deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 1 00:07:29.492: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:77 [It] deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Jul 1 00:07:29.929: INFO: Pod name rollover-pod: Found 0 pods out of 1 Jul 1 00:07:34.932: INFO: Pod name rollover-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Jul 1 00:07:34.932: INFO: Waiting for pods owned by replica set "test-rollover-controller" to become ready Jul 1 00:07:36.936: INFO: Creating deployment "test-rollover-deployment" Jul 1 00:07:36.963: INFO: Make sure deployment "test-rollover-deployment" performs scaling operations Jul 1 00:07:38.989: INFO: Check revision of new replica set for deployment "test-rollover-deployment" Jul 1 00:07:38.995: INFO: Ensure that both replica sets have 1 created replica Jul 1 00:07:39.001: INFO: Rollover old replica sets for deployment "test-rollover-deployment" with new image update Jul 1 00:07:39.008: INFO: Updating deployment test-rollover-deployment Jul 1 00:07:39.008: INFO: Wait deployment "test-rollover-deployment" to be observed by the deployment controller Jul 1 00:07:41.047: INFO: Wait for revision update of deployment "test-rollover-deployment" to 2 Jul 1 00:07:41.055: INFO: Make sure deployment "test-rollover-deployment" is complete Jul 1 00:07:41.062: INFO: all replica sets need to contain the pod-template-hash label Jul 1 00:07:41.062: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, 
ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729158857, loc:(*time.Location)(0x80643c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729158857, loc:(*time.Location)(0x80643c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729158859, loc:(*time.Location)(0x80643c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729158856, loc:(*time.Location)(0x80643c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-7c4fd9c879\" is progressing."}}, CollisionCount:(*int32)(nil)} Jul 1 00:07:43.070: INFO: all replica sets need to contain the pod-template-hash label Jul 1 00:07:43.070: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729158857, loc:(*time.Location)(0x80643c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729158857, loc:(*time.Location)(0x80643c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729158863, loc:(*time.Location)(0x80643c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729158856, loc:(*time.Location)(0x80643c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-7c4fd9c879\" is progressing."}}, CollisionCount:(*int32)(nil)} Jul 1 00:07:45.070: INFO: all replica sets need to contain the pod-template-hash label Jul 1 00:07:45.071: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729158857, loc:(*time.Location)(0x80643c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729158857, loc:(*time.Location)(0x80643c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729158863, loc:(*time.Location)(0x80643c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729158856, loc:(*time.Location)(0x80643c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-7c4fd9c879\" is progressing."}}, CollisionCount:(*int32)(nil)} Jul 1 00:07:47.069: INFO: all replica sets need to contain the pod-template-hash label Jul 1 00:07:47.069: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729158857, loc:(*time.Location)(0x80643c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729158857, loc:(*time.Location)(0x80643c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has 
minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729158863, loc:(*time.Location)(0x80643c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729158856, loc:(*time.Location)(0x80643c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-7c4fd9c879\" is progressing."}}, CollisionCount:(*int32)(nil)} Jul 1 00:07:49.071: INFO: all replica sets need to contain the pod-template-hash label Jul 1 00:07:49.071: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729158857, loc:(*time.Location)(0x80643c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729158857, loc:(*time.Location)(0x80643c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729158863, loc:(*time.Location)(0x80643c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729158856, loc:(*time.Location)(0x80643c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-7c4fd9c879\" is progressing."}}, CollisionCount:(*int32)(nil)} Jul 1 00:07:51.072: INFO: all replica sets need to contain the pod-template-hash label Jul 1 00:07:51.072: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729158857, loc:(*time.Location)(0x80643c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729158857, loc:(*time.Location)(0x80643c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729158863, loc:(*time.Location)(0x80643c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729158856, loc:(*time.Location)(0x80643c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-7c4fd9c879\" is progressing."}}, CollisionCount:(*int32)(nil)} Jul 1 00:07:53.138: INFO: Jul 1 00:07:53.138: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729158857, loc:(*time.Location)(0x80643c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729158857, loc:(*time.Location)(0x80643c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729158863, loc:(*time.Location)(0x80643c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729158856, loc:(*time.Location)(0x80643c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-7c4fd9c879\" is progressing."}}, CollisionCount:(*int32)(nil)} Jul 1 00:07:55.069: INFO: Jul 1 00:07:55.069: INFO: 
Ensure that both old replica sets have no replicas [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:71 Jul 1 00:07:55.076: INFO: Deployment "test-rollover-deployment": &Deployment{ObjectMeta:{test-rollover-deployment deployment-241 /apis/apps/v1/namespaces/deployment-241/deployments/test-rollover-deployment b49bacda-378f-4089-9f0d-29db39db68f4 17240525 2 2020-07-01 00:07:36 +0000 UTC map[name:rollover-pod] map[deployment.kubernetes.io/revision:2] [] [] [{e2e.test Update apps/v1 2020-07-01 00:07:39 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:minReadySeconds":{},"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{}}},"f:strategy":{"f:rollingUpdate":{".":{},"f:maxSurge":{},"f:maxUnavailable":{}},"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update apps/v1 2020-07-01 00:07:53 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}}},"f:status":{"f:availableReplicas":{},"f:conditions":{".":{},"k:{\"type\":\"Available\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Progressing\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{},"f:updatedReplicas":{}}}}]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod] map[] [] [] []} {[] [] [{agnhost us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc004093f68 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:0,MaxSurge:1,},},MinReadySeconds:10,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2020-07-01 00:07:37 +0000 UTC,LastTransitionTime:2020-07-01 00:07:37 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-rollover-deployment-7c4fd9c879" has successfully 
progressed.,LastUpdateTime:2020-07-01 00:07:53 +0000 UTC,LastTransitionTime:2020-07-01 00:07:36 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},} Jul 1 00:07:55.079: INFO: New ReplicaSet "test-rollover-deployment-7c4fd9c879" of Deployment "test-rollover-deployment": &ReplicaSet{ObjectMeta:{test-rollover-deployment-7c4fd9c879 deployment-241 /apis/apps/v1/namespaces/deployment-241/replicasets/test-rollover-deployment-7c4fd9c879 8b677b06-8bb3-4779-afc3-62bf0cababae 17240514 2 2020-07-01 00:07:39 +0000 UTC map[name:rollover-pod pod-template-hash:7c4fd9c879] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment test-rollover-deployment b49bacda-378f-4089-9f0d-29db39db68f4 0xc004194627 0xc004194628}] [] [{kube-controller-manager Update apps/v1 2020-07-01 00:07:53 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b49bacda-378f-4089-9f0d-29db39db68f4\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:minReadySeconds":{},"f:replicas":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 7c4fd9c879,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod-template-hash:7c4fd9c879] map[] [] [] []} {[] [] [{agnhost us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc004194748 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:2,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} Jul 1 00:07:55.079: INFO: All old ReplicaSets of Deployment "test-rollover-deployment": Jul 1 00:07:55.079: INFO: &ReplicaSet{ObjectMeta:{test-rollover-controller deployment-241 /apis/apps/v1/namespaces/deployment-241/replicasets/test-rollover-controller ac4ace34-a675-4684-9624-6c32d67c0ab7 17240524 2 2020-07-01 00:07:29 +0000 UTC map[name:rollover-pod pod:httpd] map[deployment.kubernetes.io/desired-replicas:1 
deployment.kubernetes.io/max-replicas:2] [{apps/v1 Deployment test-rollover-deployment b49bacda-378f-4089-9f0d-29db39db68f4 0xc0041943ef 0xc004194400}] [] [{e2e.test Update apps/v1 2020-07-01 00:07:29 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:selector":{"f:matchLabels":{".":{},"f:name":{},"f:pod":{}}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update apps/v1 2020-07-01 00:07:53 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b49bacda-378f-4089-9f0d-29db39db68f4\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{}},"f:status":{"f:observedGeneration":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod:httpd] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc0041944b8 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Jul 1 00:07:55.079: INFO: &ReplicaSet{ObjectMeta:{test-rollover-deployment-5686c4cfd5 deployment-241 /apis/apps/v1/namespaces/deployment-241/replicasets/test-rollover-deployment-5686c4cfd5 e07ae2f8-6208-4556-9ce3-a510f3f4c560 17240461 2 2020-07-01 00:07:36 +0000 UTC map[name:rollover-pod pod-template-hash:5686c4cfd5] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-rollover-deployment b49bacda-378f-4089-9f0d-29db39db68f4 0xc004194527 0xc004194528}] [] [{kube-controller-manager Update apps/v1 2020-07-01 00:07:39 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b49bacda-378f-4089-9f0d-29db39db68f4\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:minReadySeconds":{},"f:replicas":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"redis-slave\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:observedGeneration":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 5686c4cfd5,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod-template-hash:5686c4cfd5] map[] [] [] []} {[] [] [{redis-slave gcr.io/google_samples/gb-redisslave:nonexistent [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0041945b8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Jul 1 00:07:55.082: INFO: Pod "test-rollover-deployment-7c4fd9c879-nw9nl" is available: &Pod{ObjectMeta:{test-rollover-deployment-7c4fd9c879-nw9nl test-rollover-deployment-7c4fd9c879- deployment-241 /api/v1/namespaces/deployment-241/pods/test-rollover-deployment-7c4fd9c879-nw9nl 24797fcc-ed2b-4786-906f-ae3335bff338 17240482 0 2020-07-01 00:07:39 +0000 UTC map[name:rollover-pod pod-template-hash:7c4fd9c879] map[] [{apps/v1 ReplicaSet test-rollover-deployment-7c4fd9c879 8b677b06-8bb3-4779-afc3-62bf0cababae 0xc004194dc7 0xc004194dc8}] [] [{kube-controller-manager Update v1 2020-07-01 00:07:39 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8b677b06-8bb3-4779-afc3-62bf0cababae\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-07-01 00:07:42 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.1.98\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-2nmv4,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-2nmv4,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-2nmv4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-01 00:07:39 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 
UTC,LastTransitionTime:2020-07-01 00:07:42 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-01 00:07:42 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-01 00:07:39 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.13,PodIP:10.244.1.98,StartTime:2020-07-01 00:07:39 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-07-01 00:07:41 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13,ImageID:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost@sha256:6d5c9e684dd8f91cc36601933d51b91768d0606593de6820e19e5f194b0df1b9,ContainerID:containerd://781057db2283a5b98ad1095b06028987631d1296aee7c051207d164a09726791,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.98,},},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 1 00:07:55.082: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-241" for this suite. • [SLOW TEST:25.597 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] Deployment deployment should support rollover [Conformance]","total":294,"completed":84,"skipped":1308,"failed":0} SSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a service. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 1 00:07:55.090: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a service. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a Service STEP: Ensuring resource quota status captures service creation STEP: Deleting a Service STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 1 00:08:06.667: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-8919" for this suite. 
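The ResourceQuota test above creates a quota, creates a Service, and checks that the quota's status.used count for services rises and then falls again after deletion. A minimal client-go sketch of the same flow follows; the quota name, service name, namespace, and kubeconfig path are illustrative, not taken from the suite.

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Load the same kubeconfig the suite points at (path is an assumption).
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ns := "default" // illustrative namespace

	// A quota that tracks Service objects, analogous to the test's quota.
	quota := &corev1.ResourceQuota{
		ObjectMeta: metav1.ObjectMeta{Name: "quota-demo"},
		Spec: corev1.ResourceQuotaSpec{
			Hard: corev1.ResourceList{
				corev1.ResourceServices: resource.MustParse("2"),
			},
		},
	}
	if _, err := cs.CoreV1().ResourceQuotas(ns).Create(context.TODO(), quota, metav1.CreateOptions{}); err != nil {
		panic(err)
	}

	// Creating a Service should be reflected in the quota's status.used.
	svc := &corev1.Service{
		ObjectMeta: metav1.ObjectMeta{Name: "quota-demo-svc"},
		Spec:       corev1.ServiceSpec{Ports: []corev1.ServicePort{{Port: 80}}},
	}
	if _, err := cs.CoreV1().Services(ns).Create(context.TODO(), svc, metav1.CreateOptions{}); err != nil {
		panic(err)
	}

	// The quota controller recalculates status asynchronously, which is why
	// the test polls ("Ensuring resource quota status captures service creation").
	got, err := cs.CoreV1().ResourceQuotas(ns).Get(context.TODO(), "quota-demo", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	used := got.Status.Used[corev1.ResourceServices]
	fmt.Println("used services:", used.String())
}
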
• [SLOW TEST:11.586 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a service. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a service. [Conformance]","total":294,"completed":85,"skipped":1319,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 1 00:08:06.676: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:88 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:103 STEP: Creating service test in namespace statefulset-3662 [It] should perform canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a new StatefulSet Jul 1 00:08:06.818: INFO: Found 0 stateful pods, waiting for 3 Jul 1 00:08:16.847: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Jul 1 00:08:16.847: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Jul 1 00:08:16.847: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false Jul 1 00:08:26.823: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Jul 1 00:08:26.823: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Jul 1 00:08:26.823: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Updating stateful set template: update image from docker.io/library/httpd:2.4.38-alpine to docker.io/library/httpd:2.4.39-alpine Jul 1 00:08:26.850: INFO: Updating stateful set ss2 STEP: Creating a new revision STEP: Not applying an update when the partition is greater than the number of replicas STEP: Performing a canary update Jul 1 00:08:36.924: INFO: Updating stateful set ss2 Jul 1 00:08:36.986: INFO: Waiting for Pod statefulset-3662/ss2-2 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 STEP: Restoring Pods to the correct revision when they are deleted Jul 1 00:08:48.177: INFO: Found 2 stateful pods, waiting for 3 Jul 1 00:08:58.183: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Jul 1 00:08:58.183: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Jul 1 00:08:58.183: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Performing a 
phased rolling update Jul 1 00:08:58.208: INFO: Updating stateful set ss2 Jul 1 00:08:58.255: INFO: Waiting for Pod statefulset-3662/ss2-1 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Jul 1 00:09:08.281: INFO: Updating stateful set ss2 Jul 1 00:09:08.319: INFO: Waiting for StatefulSet statefulset-3662/ss2 to complete update Jul 1 00:09:08.319: INFO: Waiting for Pod statefulset-3662/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Jul 1 00:09:18.335: INFO: Waiting for StatefulSet statefulset-3662/ss2 to complete update [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:114 Jul 1 00:09:28.327: INFO: Deleting all statefulset in ns statefulset-3662 Jul 1 00:09:28.330: INFO: Scaling statefulset ss2 to 0 Jul 1 00:09:48.348: INFO: Waiting for statefulset status.replicas updated to 0 Jul 1 00:09:48.350: INFO: Deleting statefulset ss2 [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 1 00:09:48.378: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-3662" for this suite. • [SLOW TEST:101.722 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should perform canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance]","total":294,"completed":86,"skipped":1336,"failed":0} SSSSSSSS ------------------------------ [sig-node] PodTemplates should run the lifecycle of PodTemplates [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-node] PodTemplates /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 1 00:09:48.399: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename podtemplate STEP: Waiting for a default service account to be provisioned in namespace [It] should run the lifecycle of PodTemplates [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [AfterEach] [sig-node] PodTemplates /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 1 00:09:48.615: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "podtemplate-5404" for this suite. 
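The canary and phased rolling updates in the StatefulSet test above are driven by spec.updateStrategy.rollingUpdate.partition: only pods with an ordinal greater than or equal to the partition are recreated at the new revision, which is why the test first sees only ss2-2 move to the new revision. A sketch of flipping the partition with client-go; the namespace is a placeholder and error handling is minimal.

package main

import (
	"context"

	appsv1 "k8s.io/api/apps/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func int32Ptr(i int32) *int32 { return &i }

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ssClient := cs.AppsV1().StatefulSets("default") // placeholder namespace

	ss, err := ssClient.Get(context.TODO(), "ss2", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}

	// Canary: with 3 replicas, partition=2 means only ordinal 2 (ss2-2)
	// is recreated at the new revision; ordinals 0 and 1 stay put.
	ss.Spec.UpdateStrategy = appsv1.StatefulSetUpdateStrategy{
		Type: appsv1.RollingUpdateStatefulSetStrategyType,
		RollingUpdate: &appsv1.RollingUpdateStatefulSetStrategy{
			Partition: int32Ptr(2),
		},
	}
	ss.Spec.Template.Spec.Containers[0].Image = "docker.io/library/httpd:2.4.39-alpine"
	if _, err := ssClient.Update(context.TODO(), ss, metav1.UpdateOptions{}); err != nil {
		panic(err)
	}
	// Phased rollout: lower the partition step by step (2 -> 1 -> 0),
	// waiting for pods to become Ready at each step, as the test does.
}
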
•{"msg":"PASSED [sig-node] PodTemplates should run the lifecycle of PodTemplates [Conformance]","total":294,"completed":87,"skipped":1344,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Job should adopt matching orphans and release non-matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 1 00:09:48.627: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename job STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching orphans and release non-matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a job STEP: Ensuring active pods == parallelism STEP: Orphaning one of the Job's Pods Jul 1 00:09:55.238: INFO: Successfully updated pod "adopt-release-g2k8g" STEP: Checking that the Job readopts the Pod Jul 1 00:09:55.238: INFO: Waiting up to 15m0s for pod "adopt-release-g2k8g" in namespace "job-2902" to be "adopted" Jul 1 00:09:55.242: INFO: Pod "adopt-release-g2k8g": Phase="Running", Reason="", readiness=true. Elapsed: 3.890347ms Jul 1 00:09:57.245: INFO: Pod "adopt-release-g2k8g": Phase="Running", Reason="", readiness=true. Elapsed: 2.007603117s Jul 1 00:09:57.245: INFO: Pod "adopt-release-g2k8g" satisfied condition "adopted" STEP: Removing the labels from the Job's Pod Jul 1 00:09:57.752: INFO: Successfully updated pod "adopt-release-g2k8g" STEP: Checking that the Job releases the Pod Jul 1 00:09:57.753: INFO: Waiting up to 15m0s for pod "adopt-release-g2k8g" in namespace "job-2902" to be "released" Jul 1 00:09:57.764: INFO: Pod "adopt-release-g2k8g": Phase="Running", Reason="", readiness=true. Elapsed: 11.084253ms Jul 1 00:09:59.870: INFO: Pod "adopt-release-g2k8g": Phase="Running", Reason="", readiness=true. Elapsed: 2.117633777s Jul 1 00:09:59.870: INFO: Pod "adopt-release-g2k8g" satisfied condition "released" [AfterEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 1 00:09:59.870: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "job-2902" for this suite. 
• [SLOW TEST:11.250 seconds] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should adopt matching orphans and release non-matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] Job should adopt matching orphans and release non-matching pods [Conformance]","total":294,"completed":88,"skipped":1373,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Secrets should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 1 00:09:59.877: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating secret secrets-5839/secret-test-69c5037b-a864-4b92-b5e6-55df3a26efbe STEP: Creating a pod to test consume secrets Jul 1 00:10:00.009: INFO: Waiting up to 5m0s for pod "pod-configmaps-62dc04c5-b9dc-4dca-a2df-0e96c0ab3fd5" in namespace "secrets-5839" to be "Succeeded or Failed" Jul 1 00:10:00.057: INFO: Pod "pod-configmaps-62dc04c5-b9dc-4dca-a2df-0e96c0ab3fd5": Phase="Pending", Reason="", readiness=false. Elapsed: 48.72206ms Jul 1 00:10:02.134: INFO: Pod "pod-configmaps-62dc04c5-b9dc-4dca-a2df-0e96c0ab3fd5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.125416553s Jul 1 00:10:04.139: INFO: Pod "pod-configmaps-62dc04c5-b9dc-4dca-a2df-0e96c0ab3fd5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.129985484s STEP: Saw pod success Jul 1 00:10:04.139: INFO: Pod "pod-configmaps-62dc04c5-b9dc-4dca-a2df-0e96c0ab3fd5" satisfied condition "Succeeded or Failed" Jul 1 00:10:04.142: INFO: Trying to get logs from node latest-worker2 pod pod-configmaps-62dc04c5-b9dc-4dca-a2df-0e96c0ab3fd5 container env-test: STEP: delete the pod Jul 1 00:10:04.193: INFO: Waiting for pod pod-configmaps-62dc04c5-b9dc-4dca-a2df-0e96c0ab3fd5 to disappear Jul 1 00:10:04.207: INFO: Pod pod-configmaps-62dc04c5-b9dc-4dca-a2df-0e96c0ab3fd5 no longer exists [AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 1 00:10:04.207: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-5839" for this suite. 
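The Secrets test above injects a secret key into a container's environment and asserts the pod runs to completion. The essential wiring is an EnvVar with valueFrom.secretKeyRef; here is a minimal sketch, with the secret name, key, image, and env var name all illustrative rather than copied from the test's fixture.

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-secret-env"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "env-test",
				Image:   "busybox",
				Command: []string{"sh", "-c", "echo $SECRET_DATA"},
				Env: []corev1.EnvVar{{
					Name: "SECRET_DATA",
					ValueFrom: &corev1.EnvVarSource{
						SecretKeyRef: &corev1.SecretKeySelector{
							LocalObjectReference: corev1.LocalObjectReference{Name: "my-secret"},
							Key:                  "data-1",
						},
					},
				}},
			}},
		},
	}
	fmt.Printf("container %s reads key data-1 of secret my-secret into $SECRET_DATA\n",
		pod.Spec.Containers[0].Name)
}
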
•{"msg":"PASSED [sig-api-machinery] Secrets should be consumable via the environment [NodeConformance] [Conformance]","total":294,"completed":89,"skipped":1389,"failed":0} SSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 1 00:10:04.215: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin Jul 1 00:10:04.354: INFO: Waiting up to 5m0s for pod "downwardapi-volume-51300cc9-19c9-41c9-934a-9c23fe4b243f" in namespace "projected-7046" to be "Succeeded or Failed" Jul 1 00:10:04.382: INFO: Pod "downwardapi-volume-51300cc9-19c9-41c9-934a-9c23fe4b243f": Phase="Pending", Reason="", readiness=false. Elapsed: 28.15393ms Jul 1 00:10:06.386: INFO: Pod "downwardapi-volume-51300cc9-19c9-41c9-934a-9c23fe4b243f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.032521194s Jul 1 00:10:08.390: INFO: Pod "downwardapi-volume-51300cc9-19c9-41c9-934a-9c23fe4b243f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.036136767s STEP: Saw pod success Jul 1 00:10:08.390: INFO: Pod "downwardapi-volume-51300cc9-19c9-41c9-934a-9c23fe4b243f" satisfied condition "Succeeded or Failed" Jul 1 00:10:08.393: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-51300cc9-19c9-41c9-934a-9c23fe4b243f container client-container: STEP: delete the pod Jul 1 00:10:08.465: INFO: Waiting for pod downwardapi-volume-51300cc9-19c9-41c9-934a-9c23fe4b243f to disappear Jul 1 00:10:08.475: INFO: Pod downwardapi-volume-51300cc9-19c9-41c9-934a-9c23fe4b243f no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 1 00:10:08.475: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-7046" for this suite. 
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance]","total":294,"completed":90,"skipped":1397,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 1 00:10:08.483: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:134 [It] should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Jul 1 00:10:08.585: INFO: Creating simple daemon set daemon-set STEP: Check that daemon pods launch on every node of the cluster. Jul 1 00:10:08.606: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 1 00:10:08.619: INFO: Number of nodes with available pods: 0 Jul 1 00:10:08.619: INFO: Node latest-worker is running more than one daemon pod Jul 1 00:10:09.625: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 1 00:10:09.629: INFO: Number of nodes with available pods: 0 Jul 1 00:10:09.629: INFO: Node latest-worker is running more than one daemon pod Jul 1 00:10:10.625: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 1 00:10:10.629: INFO: Number of nodes with available pods: 0 Jul 1 00:10:10.629: INFO: Node latest-worker is running more than one daemon pod Jul 1 00:10:11.674: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 1 00:10:11.678: INFO: Number of nodes with available pods: 0 Jul 1 00:10:11.678: INFO: Node latest-worker is running more than one daemon pod Jul 1 00:10:12.624: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 1 00:10:12.628: INFO: Number of nodes with available pods: 2 Jul 1 00:10:12.628: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Update daemon pods image. STEP: Check that daemon pods images are updated. Jul 1 00:10:12.710: INFO: Wrong image for pod: daemon-set-j2ffs. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. Jul 1 00:10:12.710: INFO: Wrong image for pod: daemon-set-rz7xt. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. 
Jul 1 00:10:12.751: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 1 00:10:13.755: INFO: Wrong image for pod: daemon-set-j2ffs. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. Jul 1 00:10:13.755: INFO: Wrong image for pod: daemon-set-rz7xt. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. Jul 1 00:10:13.759: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 1 00:10:14.755: INFO: Wrong image for pod: daemon-set-j2ffs. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. Jul 1 00:10:14.755: INFO: Wrong image for pod: daemon-set-rz7xt. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. Jul 1 00:10:14.759: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 1 00:10:15.756: INFO: Wrong image for pod: daemon-set-j2ffs. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. Jul 1 00:10:15.756: INFO: Wrong image for pod: daemon-set-rz7xt. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. Jul 1 00:10:15.761: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 1 00:10:16.756: INFO: Wrong image for pod: daemon-set-j2ffs. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. Jul 1 00:10:16.756: INFO: Wrong image for pod: daemon-set-rz7xt. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. Jul 1 00:10:16.756: INFO: Pod daemon-set-rz7xt is not available Jul 1 00:10:16.760: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 1 00:10:17.756: INFO: Wrong image for pod: daemon-set-j2ffs. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. Jul 1 00:10:17.756: INFO: Wrong image for pod: daemon-set-rz7xt. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. Jul 1 00:10:17.756: INFO: Pod daemon-set-rz7xt is not available Jul 1 00:10:17.760: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 1 00:10:18.755: INFO: Wrong image for pod: daemon-set-j2ffs. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. Jul 1 00:10:18.755: INFO: Wrong image for pod: daemon-set-rz7xt. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. 
Jul 1 00:10:18.755: INFO: Pod daemon-set-rz7xt is not available Jul 1 00:10:18.760: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 1 00:10:19.756: INFO: Wrong image for pod: daemon-set-j2ffs. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. Jul 1 00:10:19.756: INFO: Wrong image for pod: daemon-set-rz7xt. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. Jul 1 00:10:19.756: INFO: Pod daemon-set-rz7xt is not available Jul 1 00:10:19.760: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 1 00:10:20.756: INFO: Wrong image for pod: daemon-set-j2ffs. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. Jul 1 00:10:20.756: INFO: Wrong image for pod: daemon-set-rz7xt. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. Jul 1 00:10:20.756: INFO: Pod daemon-set-rz7xt is not available Jul 1 00:10:20.760: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 1 00:10:21.755: INFO: Wrong image for pod: daemon-set-j2ffs. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. Jul 1 00:10:21.756: INFO: Wrong image for pod: daemon-set-rz7xt. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. Jul 1 00:10:21.756: INFO: Pod daemon-set-rz7xt is not available Jul 1 00:10:21.760: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 1 00:10:22.756: INFO: Wrong image for pod: daemon-set-j2ffs. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. Jul 1 00:10:22.756: INFO: Wrong image for pod: daemon-set-rz7xt. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. Jul 1 00:10:22.756: INFO: Pod daemon-set-rz7xt is not available Jul 1 00:10:22.760: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 1 00:10:23.754: INFO: Wrong image for pod: daemon-set-j2ffs. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. Jul 1 00:10:23.754: INFO: Wrong image for pod: daemon-set-rz7xt. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. Jul 1 00:10:23.754: INFO: Pod daemon-set-rz7xt is not available Jul 1 00:10:23.759: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 1 00:10:24.756: INFO: Wrong image for pod: daemon-set-j2ffs. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. 
Jul 1 00:10:24.756: INFO: Wrong image for pod: daemon-set-rz7xt. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. Jul 1 00:10:24.756: INFO: Pod daemon-set-rz7xt is not available Jul 1 00:10:24.761: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 1 00:10:25.755: INFO: Pod daemon-set-gr62x is not available Jul 1 00:10:25.755: INFO: Wrong image for pod: daemon-set-j2ffs. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. Jul 1 00:10:25.764: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 1 00:10:26.755: INFO: Pod daemon-set-gr62x is not available Jul 1 00:10:26.755: INFO: Wrong image for pod: daemon-set-j2ffs. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. Jul 1 00:10:26.759: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 1 00:10:27.756: INFO: Pod daemon-set-gr62x is not available Jul 1 00:10:27.756: INFO: Wrong image for pod: daemon-set-j2ffs. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. Jul 1 00:10:27.760: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 1 00:10:28.877: INFO: Wrong image for pod: daemon-set-j2ffs. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. Jul 1 00:10:28.880: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 1 00:10:29.756: INFO: Wrong image for pod: daemon-set-j2ffs. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. Jul 1 00:10:29.760: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 1 00:10:30.756: INFO: Wrong image for pod: daemon-set-j2ffs. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. Jul 1 00:10:30.756: INFO: Pod daemon-set-j2ffs is not available Jul 1 00:10:30.761: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 1 00:10:31.756: INFO: Pod daemon-set-q6hm8 is not available Jul 1 00:10:31.760: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node STEP: Check that daemon pods are still running on every node of the cluster. 
Jul 1 00:10:31.764: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 1 00:10:31.767: INFO: Number of nodes with available pods: 1 Jul 1 00:10:31.767: INFO: Node latest-worker is running more than one daemon pod Jul 1 00:10:32.772: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 1 00:10:32.776: INFO: Number of nodes with available pods: 1 Jul 1 00:10:32.776: INFO: Node latest-worker is running more than one daemon pod Jul 1 00:10:33.788: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 1 00:10:33.791: INFO: Number of nodes with available pods: 1 Jul 1 00:10:33.791: INFO: Node latest-worker is running more than one daemon pod Jul 1 00:10:34.772: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 1 00:10:34.776: INFO: Number of nodes with available pods: 2 Jul 1 00:10:34.776: INFO: Number of running nodes: 2, number of available pods: 2 [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:100 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-7678, will wait for the garbage collector to delete the pods Jul 1 00:10:34.847: INFO: Deleting DaemonSet.extensions daemon-set took: 5.564959ms Jul 1 00:10:35.148: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.27441ms Jul 1 00:10:45.351: INFO: Number of nodes with available pods: 0 Jul 1 00:10:45.351: INFO: Number of running nodes: 0, number of available pods: 0 Jul 1 00:10:45.354: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-7678/daemonsets","resourceVersion":"17241584"},"items":null} Jul 1 00:10:45.355: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-7678/pods","resourceVersion":"17241584"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 1 00:10:45.367: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-7678" for this suite. 
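The DaemonSet test above changes the pod template image and relies on the RollingUpdate strategy, which replaces daemon pods node by node; maxUnavailable defaults to 1, matching the one-pod-at-a-time "is not available" churn in the log. A sketch of declaring the strategy explicitly and triggering the rollout; the namespace is a placeholder.

package main

import (
	"context"

	appsv1 "k8s.io/api/apps/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	dsClient := cs.AppsV1().DaemonSets("default") // placeholder namespace

	ds, err := dsClient.Get(context.TODO(), "daemon-set", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}

	// At most one daemon pod per rollout step may be unavailable.
	one := intstr.FromInt(1)
	ds.Spec.UpdateStrategy = appsv1.DaemonSetUpdateStrategy{
		Type:          appsv1.RollingUpdateDaemonSetStrategyType,
		RollingUpdate: &appsv1.RollingUpdateDaemonSet{MaxUnavailable: &one},
	}
	// Changing the template image starts the node-by-node replacement
	// the log records as old pods going "not available" one at a time.
	ds.Spec.Template.Spec.Containers[0].Image = "us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13"
	if _, err := dsClient.Update(context.TODO(), ds, metav1.UpdateOptions{}); err != nil {
		panic(err)
	}
}
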
• [SLOW TEST:36.913 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]","total":294,"completed":91,"skipped":1411,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch watch on custom resource definition objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 1 00:10:45.397: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-watch STEP: Waiting for a default service account to be provisioned in namespace [It] watch on custom resource definition objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Jul 1 00:10:45.443: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating first CR Jul 1 00:10:46.042: INFO: Got : ADDED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-07-01T00:10:46Z generation:1 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2020-07-01T00:10:46Z]] name:name1 resourceVersion:17241598 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:d095559e-160c-4331-8d6c-0b95b22a5037] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Creating second CR Jul 1 00:10:56.048: INFO: Got : ADDED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-07-01T00:10:56Z generation:1 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2020-07-01T00:10:56Z]] name:name2 resourceVersion:17241658 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:eb0b7f65-fad7-4383-b926-8dcb68438253] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Modifying first CR Jul 1 00:11:06.056: INFO: Got : MODIFIED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-07-01T00:10:46Z generation:2 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:dummy:map[] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2020-07-01T00:11:06Z]] name:name1 resourceVersion:17241688 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:d095559e-160c-4331-8d6c-0b95b22a5037] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Modifying second CR Jul 1 00:11:16.063: INFO: Got : MODIFIED 
&{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-07-01T00:10:56Z generation:2 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:dummy:map[] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2020-07-01T00:11:16Z]] name:name2 resourceVersion:17241718 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:eb0b7f65-fad7-4383-b926-8dcb68438253] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Deleting first CR Jul 1 00:11:26.072: INFO: Got : DELETED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-07-01T00:10:46Z generation:2 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:dummy:map[] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2020-07-01T00:11:06Z]] name:name1 resourceVersion:17241750 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:d095559e-160c-4331-8d6c-0b95b22a5037] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Deleting second CR Jul 1 00:11:36.082: INFO: Got : DELETED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-07-01T00:10:56Z generation:2 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:dummy:map[] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2020-07-01T00:11:16Z]] name:name2 resourceVersion:17241778 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:eb0b7f65-fad7-4383-b926-8dcb68438253] num:map[num1:9223372036854775807 num2:1000000]]} [AfterEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 1 00:11:46.592: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-watch-5537" for this suite. 
• [SLOW TEST:61.205 seconds] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 CustomResourceDefinition Watch /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_watch.go:42 watch on custom resource definition objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch watch on custom resource definition objects [Conformance]","total":294,"completed":92,"skipped":1458,"failed":0} SSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 1 00:11:46.602: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90 Jul 1 00:11:46.700: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Jul 1 00:11:46.716: INFO: Waiting for terminating namespaces to be deleted... Jul 1 00:11:46.719: INFO: Logging pods the apiserver thinks is on node latest-worker before test Jul 1 00:11:46.725: INFO: rally-c184502e-30nwopzm from c-rally-c184502e-zuy338to started at 2020-05-11 08:48:25 +0000 UTC (1 container statuses recorded) Jul 1 00:11:46.725: INFO: Container rally-c184502e-30nwopzm ready: true, restart count 0 Jul 1 00:11:46.725: INFO: rally-c184502e-30nwopzm-7fmqm from c-rally-c184502e-zuy338to started at 2020-05-11 08:48:29 +0000 UTC (1 container statuses recorded) Jul 1 00:11:46.725: INFO: Container rally-c184502e-30nwopzm ready: false, restart count 0 Jul 1 00:11:46.725: INFO: kindnet-hg2tf from kube-system started at 2020-04-29 09:54:13 +0000 UTC (1 container statuses recorded) Jul 1 00:11:46.725: INFO: Container kindnet-cni ready: true, restart count 7 Jul 1 00:11:46.725: INFO: kube-proxy-c8n27 from kube-system started at 2020-04-29 09:54:13 +0000 UTC (1 container statuses recorded) Jul 1 00:11:46.725: INFO: Container kube-proxy ready: true, restart count 0 Jul 1 00:11:46.725: INFO: Logging pods the apiserver thinks is on node latest-worker2 before test Jul 1 00:11:46.730: INFO: rally-c184502e-ept97j69-6xvbj from c-rally-c184502e-2luhd3t4 started at 2020-05-11 08:48:03 +0000 UTC (1 container statuses recorded) Jul 1 00:11:46.730: INFO: Container rally-c184502e-ept97j69 ready: false, restart count 0 Jul 1 00:11:46.730: INFO: terminate-cmd-rpa297bb112-e54d-4fcd-9997-b59cbf421a58 from container-runtime-7090 started at 2020-05-12 09:11:35 +0000 UTC (1 container statuses recorded) Jul 1 00:11:46.730: INFO: Container terminate-cmd-rpa ready: true, restart count 2 Jul 1 00:11:46.730: INFO: kindnet-jl4dn from kube-system started at 2020-04-29 09:54:11 +0000 UTC (1 container statuses recorded) Jul 1 00:11:46.730: INFO: Container kindnet-cni ready: true, restart count 7 Jul 1 
00:11:46.730: INFO: kube-proxy-pcmmp from kube-system started at 2020-04-29 09:54:11 +0000 UTC (1 container statuses recorded) Jul 1 00:11:46.730: INFO: Container kube-proxy ready: true, restart count 0 [It] validates that NodeSelector is respected if not matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Trying to schedule Pod with nonempty NodeSelector. STEP: Considering event: Type = [Warning], Name = [restricted-pod.161d7879953fde6d], Reason = [FailedScheduling], Message = [0/3 nodes are available: 3 node(s) didn't match node selector.] STEP: Considering event: Type = [Warning], Name = [restricted-pod.161d787997dece1f], Reason = [FailedScheduling], Message = [0/3 nodes are available: 3 node(s) didn't match node selector.] [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 1 00:11:47.749: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-6647" for this suite. [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 •{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance]","total":294,"completed":93,"skipped":1461,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl version should check is all data is printed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 1 00:11:47.778: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:251 [It] should check is all data is printed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Jul 1 00:11:47.846: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config version' Jul 1 00:11:48.005: INFO: stderr: "" Jul 1 00:11:48.005: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"19+\", GitVersion:\"v1.19.0-beta.1.98+60b800358f7784\", GitCommit:\"60b800358f77848c4fac5376796e8a82b9039eb4\", GitTreeState:\"clean\", BuildDate:\"2020-06-08T12:34:27Z\", GoVersion:\"go1.13.12\", Compiler:\"gc\", Platform:\"linux/amd64\"}\nServer Version: version.Info{Major:\"1\", Minor:\"18\", GitVersion:\"v1.18.2\", GitCommit:\"52c56ce7a8272c798dbc29846288d7cd9fbae032\", GitTreeState:\"clean\", BuildDate:\"2020-04-28T05:35:31Z\", GoVersion:\"go1.13.9\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 1 00:11:48.005: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-4744" for this suite. 
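In the SchedulerPredicates test above, a pod carrying a nodeSelector that matches no node stays Pending while the scheduler emits FailedScheduling events ("0/3 nodes are available: 3 node(s) didn't match node selector."). A sketch of such a pod spec; the label key/value and image are invented for illustration.

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "restricted-pod"},
		Spec: corev1.PodSpec{
			// No node carries this label, so the scheduler reports
			// FailedScheduling and the pod never leaves Pending.
			NodeSelector: map[string]string{"env": "does-not-exist"},
			Containers: []corev1.Container{{
				Name:  "pause",
				Image: "k8s.gcr.io/pause:3.2",
			}},
		},
	}
	fmt.Printf("pod %q requires node label env=%s\n", pod.Name, pod.Spec.NodeSelector["env"])
}
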
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl version should check is all data is printed [Conformance]","total":294,"completed":94,"skipped":1557,"failed":0} S ------------------------------ [sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 1 00:11:48.014: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name configmap-test-upd-4a6f7a40-a658-4e79-a917-dc6f9bae2885 STEP: Creating the pod STEP: Updating configmap configmap-test-upd-4a6f7a40-a658-4e79-a917-dc6f9bae2885 STEP: waiting to observe update in volume [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 1 00:11:54.142: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-138" for this suite. • [SLOW TEST:6.136 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36 updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance]","total":294,"completed":95,"skipped":1558,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 1 00:11:54.151: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the rc STEP: delete the rc STEP: wait for all pods to be garbage collected STEP: Gathering metrics W0701 00:12:04.347705 8 metrics_grabber.go:94] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. 
Jul 1 00:12:04.347: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 1 00:12:04.347: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-4917" for this suite. • [SLOW TEST:10.204 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance]","total":294,"completed":96,"skipped":1580,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl run pod should create a pod from an image when restart is Never [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 1 00:12:04.356: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:251 [BeforeEach] Kubectl run pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1528 [It] should create a pod from an image when restart is Never [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: running the image docker.io/library/httpd:2.4.38-alpine Jul 1 00:12:04.450: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config run e2e-test-httpd-pod --restart=Never --image=docker.io/library/httpd:2.4.38-alpine --namespace=kubectl-4181' Jul 1 00:12:07.519: INFO: stderr: "" Jul 1 00:12:07.519: INFO: stdout: "pod/e2e-test-httpd-pod created\n" STEP: verifying the pod e2e-test-httpd-pod was created [AfterEach] Kubectl run pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1533 Jul 1 00:12:07.553: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 
--kubeconfig=/root/.kube/config delete pods e2e-test-httpd-pod --namespace=kubectl-4181' Jul 1 00:12:14.860: INFO: stderr: "" Jul 1 00:12:14.861: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 1 00:12:14.861: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-4181" for this suite. • [SLOW TEST:10.535 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl run pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1524 should create a pod from an image when restart is Never [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl run pod should create a pod from an image when restart is Never [Conformance]","total":294,"completed":97,"skipped":1659,"failed":0} SSSSS ------------------------------ [sig-api-machinery] ResourceQuota should verify ResourceQuota with best effort scope. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 1 00:12:14.891: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should verify ResourceQuota with best effort scope. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a ResourceQuota with best effort scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a ResourceQuota with not best effort scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a best-effort pod STEP: Ensuring resource quota with best effort scope captures the pod usage STEP: Ensuring resource quota with not best effort ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage STEP: Creating a not best-effort pod STEP: Ensuring resource quota with not best effort scope captures the pod usage STEP: Ensuring resource quota with best effort scope ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 1 00:12:31.191: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-779" for this suite. • [SLOW TEST:16.307 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should verify ResourceQuota with best effort scope. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with best effort scope. 
[Conformance]","total":294,"completed":98,"skipped":1664,"failed":0} SSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should verify ResourceQuota with terminating scopes. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 1 00:12:31.199: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should verify ResourceQuota with terminating scopes. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a ResourceQuota with terminating scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a ResourceQuota with not terminating scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a long running pod STEP: Ensuring resource quota with not terminating scope captures the pod usage STEP: Ensuring resource quota with terminating scope ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage STEP: Creating a terminating pod STEP: Ensuring resource quota with terminating scope captures the pod usage STEP: Ensuring resource quota with not terminating scope ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 1 00:12:47.522: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-1704" for this suite. • [SLOW TEST:16.332 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should verify ResourceQuota with terminating scopes. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with terminating scopes. 
[Conformance]","total":294,"completed":99,"skipped":1674,"failed":0} SS ------------------------------ [sig-auth] ServiceAccounts should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 1 00:12:47.531: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: getting the auto-created API token STEP: reading a file in the container Jul 1 00:12:52.145: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-7915 pod-service-account-6c7cba80-f6cb-45a0-ba4f-52fde3c4bde3 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/token' STEP: reading a file in the container Jul 1 00:12:52.360: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-7915 pod-service-account-6c7cba80-f6cb-45a0-ba4f-52fde3c4bde3 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/ca.crt' STEP: reading a file in the container Jul 1 00:12:52.548: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-7915 pod-service-account-6c7cba80-f6cb-45a0-ba4f-52fde3c4bde3 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/namespace' [AfterEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 1 00:12:52.809: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svcaccounts-7915" for this suite. 
• [SLOW TEST:5.285 seconds] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23 should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-auth] ServiceAccounts should mount an API token into pods [Conformance]","total":294,"completed":100,"skipped":1676,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 1 00:12:52.817: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:88 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:103 STEP: Creating service test in namespace statefulset-7182 [It] Should recreate evicted statefulset [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Looking for a node to schedule stateful set and pod STEP: Creating pod with conflicting port in namespace statefulset-7182 STEP: Creating statefulset with conflicting port in namespace statefulset-7182 STEP: Waiting until pod test-pod will start running in namespace statefulset-7182 STEP: Waiting until stateful pod ss-0 will be recreated and deleted at least once in namespace statefulset-7182 Jul 1 00:12:57.023: INFO: Observed stateful pod in namespace: statefulset-7182, name: ss-0, uid: f80b2926-488b-4a1c-9315-28fce04bb7b4, status phase: Pending. Waiting for statefulset controller to delete. Jul 1 00:12:57.155: INFO: Observed stateful pod in namespace: statefulset-7182, name: ss-0, uid: f80b2926-488b-4a1c-9315-28fce04bb7b4, status phase: Failed. Waiting for statefulset controller to delete. Jul 1 00:12:57.250: INFO: Observed stateful pod in namespace: statefulset-7182, name: ss-0, uid: f80b2926-488b-4a1c-9315-28fce04bb7b4, status phase: Failed. Waiting for statefulset controller to delete. 
Jul 1 00:12:57.307: INFO: Observed delete event for stateful pod ss-0 in namespace statefulset-7182 STEP: Removing pod with conflicting port in namespace statefulset-7182 STEP: Waiting when stateful pod ss-0 will be recreated in namespace statefulset-7182 and will be in running state [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:114 Jul 1 00:13:03.485: INFO: Deleting all statefulset in ns statefulset-7182 Jul 1 00:13:03.488: INFO: Scaling statefulset ss to 0 Jul 1 00:13:13.518: INFO: Waiting for statefulset status.replicas updated to 0 Jul 1 00:13:13.522: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 1 00:13:13.543: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-7182" for this suite. • [SLOW TEST:20.734 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 Should recreate evicted statefulset [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]","total":294,"completed":101,"skipped":1759,"failed":0} [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 1 00:13:13.551: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:77 [It] RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Jul 1 00:13:13.673: INFO: Creating replica set "test-rolling-update-controller" (going to be adopted) Jul 1 00:13:13.684: INFO: Pod name sample-pod: Found 0 pods out of 1 Jul 1 00:13:18.703: INFO: Pod name sample-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Jul 1 00:13:18.703: INFO: Creating deployment "test-rolling-update-deployment" Jul 1 00:13:18.715: INFO: Ensuring deployment "test-rolling-update-deployment" gets the next revision from the one the adopted replica set "test-rolling-update-controller" has Jul 1 00:13:18.794: INFO: new replicaset for deployment "test-rolling-update-deployment" is yet to be created Jul 1 00:13:20.800: INFO: Ensuring status for deployment "test-rolling-update-deployment" is the expected Jul 1 00:13:20.803: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", 
Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729159198, loc:(*time.Location)(0x80643c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729159198, loc:(*time.Location)(0x80643c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729159198, loc:(*time.Location)(0x80643c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729159198, loc:(*time.Location)(0x80643c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-df7bb669b\" is progressing."}}, CollisionCount:(*int32)(nil)} Jul 1 00:13:22.807: INFO: Ensuring deployment "test-rolling-update-deployment" has one old replica set (the one it adopted) [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:71 Jul 1 00:13:22.815: INFO: Deployment "test-rolling-update-deployment": &Deployment{ObjectMeta:{test-rolling-update-deployment deployment-6920 /apis/apps/v1/namespaces/deployment-6920/deployments/test-rolling-update-deployment 97fed28a-bfaf-4117-b74a-86083ad275e7 17242536 1 2020-07-01 00:13:18 +0000 UTC map[name:sample-pod] map[deployment.kubernetes.io/revision:3546343826724305833] [] [] [{e2e.test Update apps/v1 2020-07-01 00:13:18 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{}}},"f:strategy":{"f:rollingUpdate":{".":{},"f:maxSurge":{},"f:maxUnavailable":{}},"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update apps/v1 2020-07-01 00:13:22 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}}},"f:status":{"f:availableReplicas":{},"f:conditions":{".":{},"k:{\"type\":\"Available\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Progressing\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{},"f:updatedReplicas":{}}}}]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod] map[] [] [] []} {[] [] [{agnhost us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc003fbca58 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} 
[] nil default-scheduler [] [] nil [] map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2020-07-01 00:13:18 +0000 UTC,LastTransitionTime:2020-07-01 00:13:18 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-rolling-update-deployment-df7bb669b" has successfully progressed.,LastUpdateTime:2020-07-01 00:13:22 +0000 UTC,LastTransitionTime:2020-07-01 00:13:18 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},} Jul 1 00:13:22.818: INFO: New ReplicaSet "test-rolling-update-deployment-df7bb669b" of Deployment "test-rolling-update-deployment": &ReplicaSet{ObjectMeta:{test-rolling-update-deployment-df7bb669b deployment-6920 /apis/apps/v1/namespaces/deployment-6920/replicasets/test-rolling-update-deployment-df7bb669b 112b5099-698b-42f9-9012-81c4a116a620 17242522 1 2020-07-01 00:13:18 +0000 UTC map[name:sample-pod pod-template-hash:df7bb669b] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:3546343826724305833] [{apps/v1 Deployment test-rolling-update-deployment 97fed28a-bfaf-4117-b74a-86083ad275e7 0xc003fbd000 0xc003fbd001}] [] [{kube-controller-manager Update apps/v1 2020-07-01 00:13:22 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"97fed28a-bfaf-4117-b74a-86083ad275e7\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod-template-hash: df7bb669b,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod pod-template-hash:df7bb669b] map[] [] [] []} {[] [] [{agnhost us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc003fbd088 ClusterFirst map[] false false false
&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} Jul 1 00:13:22.818: INFO: All old ReplicaSets of Deployment "test-rolling-update-deployment": Jul 1 00:13:22.818: INFO: &ReplicaSet{ObjectMeta:{test-rolling-update-controller deployment-6920 /apis/apps/v1/namespaces/deployment-6920/replicasets/test-rolling-update-controller 87d08b7d-8628-447c-a6ef-fef4adee6501 17242535 2 2020-07-01 00:13:13 +0000 UTC map[name:sample-pod pod:httpd] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:3546343826724305832] [{apps/v1 Deployment test-rolling-update-deployment 97fed28a-bfaf-4117-b74a-86083ad275e7 0xc003fbcebf 0xc003fbced0}] [] [{e2e.test Update apps/v1 2020-07-01 00:13:13 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:selector":{"f:matchLabels":{".":{},"f:name":{},"f:pod":{}}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update apps/v1 2020-07-01 00:13:22 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"97fed28a-bfaf-4117-b74a-86083ad275e7\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{}},"f:status":{"f:observedGeneration":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod pod:httpd] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc003fbcf88 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Jul 1 00:13:22.822: INFO: Pod "test-rolling-update-deployment-df7bb669b-s85k8" is available: &Pod{ObjectMeta:{test-rolling-update-deployment-df7bb669b-s85k8 test-rolling-update-deployment-df7bb669b- deployment-6920 /api/v1/namespaces/deployment-6920/pods/test-rolling-update-deployment-df7bb669b-s85k8 de0416ed-852c-41ec-aa1f-2bcb0a93c0a3 17242521 0 2020-07-01 00:13:18 +0000 UTC map[name:sample-pod pod-template-hash:df7bb669b] map[] [{apps/v1 ReplicaSet 
test-rolling-update-deployment-df7bb669b 112b5099-698b-42f9-9012-81c4a116a620 0xc003fbd5b0 0xc003fbd5b1}] [] [{kube-controller-manager Update v1 2020-07-01 00:13:18 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"112b5099-698b-42f9-9012-81c4a116a620\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-07-01 00:13:21 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.1.111\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-hnkhc,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-hnkhc,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-hnkhc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccount
Token:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-01 00:13:18 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-01 00:13:21 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-01 00:13:21 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-01 00:13:18 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.13,PodIP:10.244.1.111,StartTime:2020-07-01 00:13:18 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-07-01 00:13:21 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13,ImageID:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost@sha256:6d5c9e684dd8f91cc36601933d51b91768d0606593de6820e19e5f194b0df1b9,ContainerID:containerd://4b7ab2a7cfc7412dd030dfc37da40c96154662b00f964ae0aa0d804b18820b76,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.111,},},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 1 00:13:22.822: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-6920" for this suite. 
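------------------------------
The RollingUpdateDeployment test above checks that a Deployment which adopts an existing ReplicaSet replaces the old pods at the next revision while keeping availability. A minimal sketch of the same rolling-update behaviour with plain kubectl (deployment name and the second image tag are illustrative):

kubectl create deployment demo --image=docker.io/library/httpd:2.4.38-alpine
kubectl set image deployment/demo httpd=docker.io/library/httpd:2.4.39-alpine
kubectl rollout status deployment/demo   # blocks until the new ReplicaSet is available
kubectl get rs -l app=demo               # old ReplicaSet scaled to 0, new one serving
------------------------------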
• [SLOW TEST:9.279 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance]","total":294,"completed":102,"skipped":1759,"failed":0} SSSSSS ------------------------------ [sig-network] DNS should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 1 00:13:22.831: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-459.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-459.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-459.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-459.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-459.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-459.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-459.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-459.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-459.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-459.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-459.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-459.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-459.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 212.9.97.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.97.9.212_udp@PTR;check="$$(dig +tcp +noall +answer +search 212.9.97.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.97.9.212_tcp@PTR;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-459.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-459.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-459.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-459.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-459.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-459.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-459.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-459.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-459.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-459.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-459.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-459.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-459.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 212.9.97.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.97.9.212_udp@PTR;check="$$(dig +tcp +noall +answer +search 212.9.97.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.97.9.212_tcp@PTR;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Jul 1 00:13:31.371: INFO: Unable to read wheezy_udp@dns-test-service.dns-459.svc.cluster.local from pod dns-459/dns-test-db83567a-bf51-4f2c-8f4a-fa447ee397c9: the server could not find the requested resource (get pods dns-test-db83567a-bf51-4f2c-8f4a-fa447ee397c9) Jul 1 00:13:31.410: INFO: Unable to read wheezy_tcp@dns-test-service.dns-459.svc.cluster.local from pod dns-459/dns-test-db83567a-bf51-4f2c-8f4a-fa447ee397c9: the server could not find the requested resource (get pods dns-test-db83567a-bf51-4f2c-8f4a-fa447ee397c9) Jul 1 00:13:31.414: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-459.svc.cluster.local from pod dns-459/dns-test-db83567a-bf51-4f2c-8f4a-fa447ee397c9: the server could not find the requested resource (get pods dns-test-db83567a-bf51-4f2c-8f4a-fa447ee397c9) Jul 1 00:13:31.470: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-459.svc.cluster.local from pod dns-459/dns-test-db83567a-bf51-4f2c-8f4a-fa447ee397c9: the server could not find the requested resource (get pods dns-test-db83567a-bf51-4f2c-8f4a-fa447ee397c9) Jul 1 00:13:31.494: INFO: Unable to read jessie_udp@dns-test-service.dns-459.svc.cluster.local from pod dns-459/dns-test-db83567a-bf51-4f2c-8f4a-fa447ee397c9: the server could not find the requested resource (get pods dns-test-db83567a-bf51-4f2c-8f4a-fa447ee397c9) Jul 1 00:13:31.498: INFO: Unable to read jessie_tcp@dns-test-service.dns-459.svc.cluster.local from pod dns-459/dns-test-db83567a-bf51-4f2c-8f4a-fa447ee397c9: the server could not find the requested resource (get pods dns-test-db83567a-bf51-4f2c-8f4a-fa447ee397c9) Jul 1 00:13:31.501: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-459.svc.cluster.local from pod dns-459/dns-test-db83567a-bf51-4f2c-8f4a-fa447ee397c9: the server could not find the requested resource (get pods dns-test-db83567a-bf51-4f2c-8f4a-fa447ee397c9) Jul 1 00:13:31.504: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-459.svc.cluster.local from pod dns-459/dns-test-db83567a-bf51-4f2c-8f4a-fa447ee397c9: the server could not find the requested resource (get pods dns-test-db83567a-bf51-4f2c-8f4a-fa447ee397c9) Jul 1 00:13:31.525: INFO: Lookups using dns-459/dns-test-db83567a-bf51-4f2c-8f4a-fa447ee397c9 failed for: [wheezy_udp@dns-test-service.dns-459.svc.cluster.local wheezy_tcp@dns-test-service.dns-459.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-459.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-459.svc.cluster.local jessie_udp@dns-test-service.dns-459.svc.cluster.local jessie_tcp@dns-test-service.dns-459.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-459.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-459.svc.cluster.local] Jul 1 00:13:36.531: INFO: Unable to read wheezy_udp@dns-test-service.dns-459.svc.cluster.local from pod dns-459/dns-test-db83567a-bf51-4f2c-8f4a-fa447ee397c9: the server could not find the requested resource (get pods dns-test-db83567a-bf51-4f2c-8f4a-fa447ee397c9) Jul 1 00:13:36.535: INFO: Unable to read wheezy_tcp@dns-test-service.dns-459.svc.cluster.local from pod dns-459/dns-test-db83567a-bf51-4f2c-8f4a-fa447ee397c9: the server could not find the requested resource (get pods dns-test-db83567a-bf51-4f2c-8f4a-fa447ee397c9) Jul 1 00:13:36.539: INFO: 
Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-459.svc.cluster.local from pod dns-459/dns-test-db83567a-bf51-4f2c-8f4a-fa447ee397c9: the server could not find the requested resource (get pods dns-test-db83567a-bf51-4f2c-8f4a-fa447ee397c9) Jul 1 00:13:36.542: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-459.svc.cluster.local from pod dns-459/dns-test-db83567a-bf51-4f2c-8f4a-fa447ee397c9: the server could not find the requested resource (get pods dns-test-db83567a-bf51-4f2c-8f4a-fa447ee397c9) Jul 1 00:13:36.587: INFO: Unable to read jessie_udp@dns-test-service.dns-459.svc.cluster.local from pod dns-459/dns-test-db83567a-bf51-4f2c-8f4a-fa447ee397c9: the server could not find the requested resource (get pods dns-test-db83567a-bf51-4f2c-8f4a-fa447ee397c9) Jul 1 00:13:36.591: INFO: Unable to read jessie_tcp@dns-test-service.dns-459.svc.cluster.local from pod dns-459/dns-test-db83567a-bf51-4f2c-8f4a-fa447ee397c9: the server could not find the requested resource (get pods dns-test-db83567a-bf51-4f2c-8f4a-fa447ee397c9) Jul 1 00:13:36.594: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-459.svc.cluster.local from pod dns-459/dns-test-db83567a-bf51-4f2c-8f4a-fa447ee397c9: the server could not find the requested resource (get pods dns-test-db83567a-bf51-4f2c-8f4a-fa447ee397c9) Jul 1 00:13:36.597: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-459.svc.cluster.local from pod dns-459/dns-test-db83567a-bf51-4f2c-8f4a-fa447ee397c9: the server could not find the requested resource (get pods dns-test-db83567a-bf51-4f2c-8f4a-fa447ee397c9) Jul 1 00:13:36.617: INFO: Lookups using dns-459/dns-test-db83567a-bf51-4f2c-8f4a-fa447ee397c9 failed for: [wheezy_udp@dns-test-service.dns-459.svc.cluster.local wheezy_tcp@dns-test-service.dns-459.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-459.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-459.svc.cluster.local jessie_udp@dns-test-service.dns-459.svc.cluster.local jessie_tcp@dns-test-service.dns-459.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-459.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-459.svc.cluster.local] Jul 1 00:13:41.530: INFO: Unable to read wheezy_udp@dns-test-service.dns-459.svc.cluster.local from pod dns-459/dns-test-db83567a-bf51-4f2c-8f4a-fa447ee397c9: the server could not find the requested resource (get pods dns-test-db83567a-bf51-4f2c-8f4a-fa447ee397c9) Jul 1 00:13:41.534: INFO: Unable to read wheezy_tcp@dns-test-service.dns-459.svc.cluster.local from pod dns-459/dns-test-db83567a-bf51-4f2c-8f4a-fa447ee397c9: the server could not find the requested resource (get pods dns-test-db83567a-bf51-4f2c-8f4a-fa447ee397c9) Jul 1 00:13:41.542: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-459.svc.cluster.local from pod dns-459/dns-test-db83567a-bf51-4f2c-8f4a-fa447ee397c9: the server could not find the requested resource (get pods dns-test-db83567a-bf51-4f2c-8f4a-fa447ee397c9) Jul 1 00:13:41.544: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-459.svc.cluster.local from pod dns-459/dns-test-db83567a-bf51-4f2c-8f4a-fa447ee397c9: the server could not find the requested resource (get pods dns-test-db83567a-bf51-4f2c-8f4a-fa447ee397c9) Jul 1 00:13:41.569: INFO: Unable to read jessie_udp@dns-test-service.dns-459.svc.cluster.local from pod dns-459/dns-test-db83567a-bf51-4f2c-8f4a-fa447ee397c9: the server could not find the requested resource (get pods dns-test-db83567a-bf51-4f2c-8f4a-fa447ee397c9) Jul 1 
00:13:41.572: INFO: Unable to read jessie_tcp@dns-test-service.dns-459.svc.cluster.local from pod dns-459/dns-test-db83567a-bf51-4f2c-8f4a-fa447ee397c9: the server could not find the requested resource (get pods dns-test-db83567a-bf51-4f2c-8f4a-fa447ee397c9) Jul 1 00:13:41.575: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-459.svc.cluster.local from pod dns-459/dns-test-db83567a-bf51-4f2c-8f4a-fa447ee397c9: the server could not find the requested resource (get pods dns-test-db83567a-bf51-4f2c-8f4a-fa447ee397c9) Jul 1 00:13:41.578: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-459.svc.cluster.local from pod dns-459/dns-test-db83567a-bf51-4f2c-8f4a-fa447ee397c9: the server could not find the requested resource (get pods dns-test-db83567a-bf51-4f2c-8f4a-fa447ee397c9) Jul 1 00:13:41.598: INFO: Lookups using dns-459/dns-test-db83567a-bf51-4f2c-8f4a-fa447ee397c9 failed for: [wheezy_udp@dns-test-service.dns-459.svc.cluster.local wheezy_tcp@dns-test-service.dns-459.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-459.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-459.svc.cluster.local jessie_udp@dns-test-service.dns-459.svc.cluster.local jessie_tcp@dns-test-service.dns-459.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-459.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-459.svc.cluster.local] Jul 1 00:13:46.531: INFO: Unable to read wheezy_udp@dns-test-service.dns-459.svc.cluster.local from pod dns-459/dns-test-db83567a-bf51-4f2c-8f4a-fa447ee397c9: the server could not find the requested resource (get pods dns-test-db83567a-bf51-4f2c-8f4a-fa447ee397c9) Jul 1 00:13:46.534: INFO: Unable to read wheezy_tcp@dns-test-service.dns-459.svc.cluster.local from pod dns-459/dns-test-db83567a-bf51-4f2c-8f4a-fa447ee397c9: the server could not find the requested resource (get pods dns-test-db83567a-bf51-4f2c-8f4a-fa447ee397c9) Jul 1 00:13:46.538: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-459.svc.cluster.local from pod dns-459/dns-test-db83567a-bf51-4f2c-8f4a-fa447ee397c9: the server could not find the requested resource (get pods dns-test-db83567a-bf51-4f2c-8f4a-fa447ee397c9) Jul 1 00:13:46.542: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-459.svc.cluster.local from pod dns-459/dns-test-db83567a-bf51-4f2c-8f4a-fa447ee397c9: the server could not find the requested resource (get pods dns-test-db83567a-bf51-4f2c-8f4a-fa447ee397c9) Jul 1 00:13:46.577: INFO: Unable to read jessie_udp@dns-test-service.dns-459.svc.cluster.local from pod dns-459/dns-test-db83567a-bf51-4f2c-8f4a-fa447ee397c9: the server could not find the requested resource (get pods dns-test-db83567a-bf51-4f2c-8f4a-fa447ee397c9) Jul 1 00:13:46.581: INFO: Unable to read jessie_tcp@dns-test-service.dns-459.svc.cluster.local from pod dns-459/dns-test-db83567a-bf51-4f2c-8f4a-fa447ee397c9: the server could not find the requested resource (get pods dns-test-db83567a-bf51-4f2c-8f4a-fa447ee397c9) Jul 1 00:13:46.584: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-459.svc.cluster.local from pod dns-459/dns-test-db83567a-bf51-4f2c-8f4a-fa447ee397c9: the server could not find the requested resource (get pods dns-test-db83567a-bf51-4f2c-8f4a-fa447ee397c9) Jul 1 00:13:46.588: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-459.svc.cluster.local from pod dns-459/dns-test-db83567a-bf51-4f2c-8f4a-fa447ee397c9: the server could not find the requested resource (get pods 
dns-test-db83567a-bf51-4f2c-8f4a-fa447ee397c9) Jul 1 00:13:46.607: INFO: Lookups using dns-459/dns-test-db83567a-bf51-4f2c-8f4a-fa447ee397c9 failed for: [wheezy_udp@dns-test-service.dns-459.svc.cluster.local wheezy_tcp@dns-test-service.dns-459.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-459.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-459.svc.cluster.local jessie_udp@dns-test-service.dns-459.svc.cluster.local jessie_tcp@dns-test-service.dns-459.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-459.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-459.svc.cluster.local] Jul 1 00:13:51.530: INFO: Unable to read wheezy_udp@dns-test-service.dns-459.svc.cluster.local from pod dns-459/dns-test-db83567a-bf51-4f2c-8f4a-fa447ee397c9: the server could not find the requested resource (get pods dns-test-db83567a-bf51-4f2c-8f4a-fa447ee397c9) Jul 1 00:13:51.534: INFO: Unable to read wheezy_tcp@dns-test-service.dns-459.svc.cluster.local from pod dns-459/dns-test-db83567a-bf51-4f2c-8f4a-fa447ee397c9: the server could not find the requested resource (get pods dns-test-db83567a-bf51-4f2c-8f4a-fa447ee397c9) Jul 1 00:13:51.538: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-459.svc.cluster.local from pod dns-459/dns-test-db83567a-bf51-4f2c-8f4a-fa447ee397c9: the server could not find the requested resource (get pods dns-test-db83567a-bf51-4f2c-8f4a-fa447ee397c9) Jul 1 00:13:51.541: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-459.svc.cluster.local from pod dns-459/dns-test-db83567a-bf51-4f2c-8f4a-fa447ee397c9: the server could not find the requested resource (get pods dns-test-db83567a-bf51-4f2c-8f4a-fa447ee397c9) Jul 1 00:13:51.565: INFO: Unable to read jessie_udp@dns-test-service.dns-459.svc.cluster.local from pod dns-459/dns-test-db83567a-bf51-4f2c-8f4a-fa447ee397c9: the server could not find the requested resource (get pods dns-test-db83567a-bf51-4f2c-8f4a-fa447ee397c9) Jul 1 00:13:51.568: INFO: Unable to read jessie_tcp@dns-test-service.dns-459.svc.cluster.local from pod dns-459/dns-test-db83567a-bf51-4f2c-8f4a-fa447ee397c9: the server could not find the requested resource (get pods dns-test-db83567a-bf51-4f2c-8f4a-fa447ee397c9) Jul 1 00:13:51.572: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-459.svc.cluster.local from pod dns-459/dns-test-db83567a-bf51-4f2c-8f4a-fa447ee397c9: the server could not find the requested resource (get pods dns-test-db83567a-bf51-4f2c-8f4a-fa447ee397c9) Jul 1 00:13:51.576: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-459.svc.cluster.local from pod dns-459/dns-test-db83567a-bf51-4f2c-8f4a-fa447ee397c9: the server could not find the requested resource (get pods dns-test-db83567a-bf51-4f2c-8f4a-fa447ee397c9) Jul 1 00:13:51.602: INFO: Lookups using dns-459/dns-test-db83567a-bf51-4f2c-8f4a-fa447ee397c9 failed for: [wheezy_udp@dns-test-service.dns-459.svc.cluster.local wheezy_tcp@dns-test-service.dns-459.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-459.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-459.svc.cluster.local jessie_udp@dns-test-service.dns-459.svc.cluster.local jessie_tcp@dns-test-service.dns-459.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-459.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-459.svc.cluster.local] Jul 1 00:13:56.530: INFO: Unable to read wheezy_udp@dns-test-service.dns-459.svc.cluster.local from pod dns-459/dns-test-db83567a-bf51-4f2c-8f4a-fa447ee397c9: the server could 
not find the requested resource (get pods dns-test-db83567a-bf51-4f2c-8f4a-fa447ee397c9) Jul 1 00:13:56.534: INFO: Unable to read wheezy_tcp@dns-test-service.dns-459.svc.cluster.local from pod dns-459/dns-test-db83567a-bf51-4f2c-8f4a-fa447ee397c9: the server could not find the requested resource (get pods dns-test-db83567a-bf51-4f2c-8f4a-fa447ee397c9) Jul 1 00:13:56.538: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-459.svc.cluster.local from pod dns-459/dns-test-db83567a-bf51-4f2c-8f4a-fa447ee397c9: the server could not find the requested resource (get pods dns-test-db83567a-bf51-4f2c-8f4a-fa447ee397c9) Jul 1 00:13:56.540: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-459.svc.cluster.local from pod dns-459/dns-test-db83567a-bf51-4f2c-8f4a-fa447ee397c9: the server could not find the requested resource (get pods dns-test-db83567a-bf51-4f2c-8f4a-fa447ee397c9) Jul 1 00:13:56.559: INFO: Unable to read jessie_udp@dns-test-service.dns-459.svc.cluster.local from pod dns-459/dns-test-db83567a-bf51-4f2c-8f4a-fa447ee397c9: the server could not find the requested resource (get pods dns-test-db83567a-bf51-4f2c-8f4a-fa447ee397c9) Jul 1 00:13:56.561: INFO: Unable to read jessie_tcp@dns-test-service.dns-459.svc.cluster.local from pod dns-459/dns-test-db83567a-bf51-4f2c-8f4a-fa447ee397c9: the server could not find the requested resource (get pods dns-test-db83567a-bf51-4f2c-8f4a-fa447ee397c9) Jul 1 00:13:56.563: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-459.svc.cluster.local from pod dns-459/dns-test-db83567a-bf51-4f2c-8f4a-fa447ee397c9: the server could not find the requested resource (get pods dns-test-db83567a-bf51-4f2c-8f4a-fa447ee397c9) Jul 1 00:13:56.566: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-459.svc.cluster.local from pod dns-459/dns-test-db83567a-bf51-4f2c-8f4a-fa447ee397c9: the server could not find the requested resource (get pods dns-test-db83567a-bf51-4f2c-8f4a-fa447ee397c9) Jul 1 00:13:56.582: INFO: Lookups using dns-459/dns-test-db83567a-bf51-4f2c-8f4a-fa447ee397c9 failed for: [wheezy_udp@dns-test-service.dns-459.svc.cluster.local wheezy_tcp@dns-test-service.dns-459.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-459.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-459.svc.cluster.local jessie_udp@dns-test-service.dns-459.svc.cluster.local jessie_tcp@dns-test-service.dns-459.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-459.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-459.svc.cluster.local] Jul 1 00:14:01.598: INFO: DNS probes using dns-459/dns-test-db83567a-bf51-4f2c-8f4a-fa447ee397c9 succeeded STEP: deleting the pod STEP: deleting the test service STEP: deleting the test headless service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 1 00:14:02.574: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-459" for this suite. 
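------------------------------
The wheezy and jessie probe pods above loop over dig lookups for the service's A and SRV records until every query writes an OK marker; the repeated "Unable to read ..." lines are normal retries while the records propagate. A one-shot version of the same lookups, run from any pod that has dig installed (the pod name dns-probe is assumed):

SVC=dns-test-service; NS=dns-459   # values from this run; substitute your own
kubectl exec -n "$NS" dns-probe -- dig +notcp +noall +answer +search "$SVC.$NS.svc.cluster.local" A
kubectl exec -n "$NS" dns-probe -- dig +tcp +noall +answer +search "_http._tcp.$SVC.$NS.svc.cluster.local" SRV
------------------------------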
• [SLOW TEST:39.753 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for services [Conformance]","total":294,"completed":103,"skipped":1765,"failed":0} [sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 1 00:14:02.584: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin Jul 1 00:14:02.763: INFO: Waiting up to 5m0s for pod "downwardapi-volume-d664447c-2d87-4a0a-8cfe-75db46489584" in namespace "downward-api-769" to be "Succeeded or Failed" Jul 1 00:14:02.775: INFO: Pod "downwardapi-volume-d664447c-2d87-4a0a-8cfe-75db46489584": Phase="Pending", Reason="", readiness=false. Elapsed: 12.043616ms Jul 1 00:14:04.824: INFO: Pod "downwardapi-volume-d664447c-2d87-4a0a-8cfe-75db46489584": Phase="Pending", Reason="", readiness=false. Elapsed: 2.060832259s Jul 1 00:14:06.896: INFO: Pod "downwardapi-volume-d664447c-2d87-4a0a-8cfe-75db46489584": Phase="Pending", Reason="", readiness=false. Elapsed: 4.132723965s Jul 1 00:14:08.900: INFO: Pod "downwardapi-volume-d664447c-2d87-4a0a-8cfe-75db46489584": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.136287209s STEP: Saw pod success Jul 1 00:14:08.900: INFO: Pod "downwardapi-volume-d664447c-2d87-4a0a-8cfe-75db46489584" satisfied condition "Succeeded or Failed" Jul 1 00:14:08.903: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-d664447c-2d87-4a0a-8cfe-75db46489584 container client-container: STEP: delete the pod Jul 1 00:14:08.959: INFO: Waiting for pod downwardapi-volume-d664447c-2d87-4a0a-8cfe-75db46489584 to disappear Jul 1 00:14:08.991: INFO: Pod downwardapi-volume-d664447c-2d87-4a0a-8cfe-75db46489584 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 1 00:14:08.991: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-769" for this suite. 
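------------------------------
The Downward API test above mounts a downwardAPI volume whose file carries the container's own memory limit. A minimal sketch of such a pod spec (pod name, image, and the 64Mi limit are illustrative):

cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-demo
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "cat /etc/podinfo/memory_limit"]
    resources:
      limits:
        memory: "64Mi"
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: memory_limit
        resourceFieldRef:
          containerName: client-container
          resource: limits.memory
EOF
kubectl logs downwardapi-demo   # prints the limit in bytes, e.g. 67108864
------------------------------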
• [SLOW TEST:6.416 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:37 should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance]","total":294,"completed":104,"skipped":1765,"failed":0} SSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 1 00:14:09.000: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name projected-configmap-test-volume-map-a167c1fb-95c0-4012-8566-fdbdd8a134ef STEP: Creating a pod to test consume configMaps Jul 1 00:14:09.072: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-506b0441-56e5-4d36-9bee-30952d424ded" in namespace "projected-3625" to be "Succeeded or Failed" Jul 1 00:14:09.089: INFO: Pod "pod-projected-configmaps-506b0441-56e5-4d36-9bee-30952d424ded": Phase="Pending", Reason="", readiness=false. Elapsed: 17.053974ms Jul 1 00:14:11.094: INFO: Pod "pod-projected-configmaps-506b0441-56e5-4d36-9bee-30952d424ded": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021635603s Jul 1 00:14:13.097: INFO: Pod "pod-projected-configmaps-506b0441-56e5-4d36-9bee-30952d424ded": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.025570856s STEP: Saw pod success Jul 1 00:14:13.098: INFO: Pod "pod-projected-configmaps-506b0441-56e5-4d36-9bee-30952d424ded" satisfied condition "Succeeded or Failed" Jul 1 00:14:13.100: INFO: Trying to get logs from node latest-worker pod pod-projected-configmaps-506b0441-56e5-4d36-9bee-30952d424ded container projected-configmap-volume-test: STEP: delete the pod Jul 1 00:14:13.141: INFO: Waiting for pod pod-projected-configmaps-506b0441-56e5-4d36-9bee-30952d424ded to disappear Jul 1 00:14:13.171: INFO: Pod pod-projected-configmaps-506b0441-56e5-4d36-9bee-30952d424ded no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 1 00:14:13.171: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3625" for this suite. 
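The projected configMap test above exercises key-to-path mappings: a configMap key is surfaced in the volume under a chosen relative path rather than under its own name. A sketch of such a pod, again with client-go types; the key "data-2" and the path are illustrative, not taken from the test source:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func projectedConfigMapPod() *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-projected-configmaps-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "projected-configmap-volume-test",
				Image:   "busybox", // assumed
				Command: []string{"cat", "/etc/projected-configmap-volume/path/to/data-2"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "projected-configmap-volume",
					MountPath: "/etc/projected-configmap-volume",
				}},
			}},
			Volumes: []corev1.Volume{{
				Name: "projected-configmap-volume",
				VolumeSource: corev1.VolumeSource{
					Projected: &corev1.ProjectedVolumeSource{
						Sources: []corev1.VolumeProjection{{
							ConfigMap: &corev1.ConfigMapProjection{
								LocalObjectReference: corev1.LocalObjectReference{Name: "projected-configmap-test-volume-map"},
								// The mapping: key "data-2" appears at path/to/data-2
								// inside the mount instead of at its own name.
								Items: []corev1.KeyToPath{{Key: "data-2", Path: "path/to/data-2"}},
							},
						}},
					},
				},
			}},
		},
	}
}

func main() { fmt.Println(projectedConfigMapPod().Name) }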
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":294,"completed":105,"skipped":1768,"failed":0} SSSSSSS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 1 00:14:13.181: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test override command Jul 1 00:14:13.263: INFO: Waiting up to 5m0s for pod "client-containers-948bcb00-c65b-44c1-a815-1536b1e99562" in namespace "containers-6252" to be "Succeeded or Failed" Jul 1 00:14:13.315: INFO: Pod "client-containers-948bcb00-c65b-44c1-a815-1536b1e99562": Phase="Pending", Reason="", readiness=false. Elapsed: 51.298679ms Jul 1 00:14:15.319: INFO: Pod "client-containers-948bcb00-c65b-44c1-a815-1536b1e99562": Phase="Pending", Reason="", readiness=false. Elapsed: 2.056190411s Jul 1 00:14:17.324: INFO: Pod "client-containers-948bcb00-c65b-44c1-a815-1536b1e99562": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.061226635s STEP: Saw pod success Jul 1 00:14:17.325: INFO: Pod "client-containers-948bcb00-c65b-44c1-a815-1536b1e99562" satisfied condition "Succeeded or Failed" Jul 1 00:14:17.328: INFO: Trying to get logs from node latest-worker pod client-containers-948bcb00-c65b-44c1-a815-1536b1e99562 container test-container: STEP: delete the pod Jul 1 00:14:17.508: INFO: Waiting for pod client-containers-948bcb00-c65b-44c1-a815-1536b1e99562 to disappear Jul 1 00:14:17.536: INFO: Pod client-containers-948bcb00-c65b-44c1-a815-1536b1e99562 no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 1 00:14:17.536: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-6252" for this suite. 
•{"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]","total":294,"completed":106,"skipped":1775,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 1 00:14:17.545: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Jul 1 00:14:18.254: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Jul 1 00:14:20.273: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729159258, loc:(*time.Location)(0x80643c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729159258, loc:(*time.Location)(0x80643c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729159258, loc:(*time.Location)(0x80643c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729159258, loc:(*time.Location)(0x80643c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Jul 1 00:14:23.339: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should honor timeout [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Setting timeout (1s) shorter than webhook latency (5s) STEP: Registering slow webhook via the AdmissionRegistration API STEP: Request fails when timeout (1s) is shorter than slow webhook latency (5s) STEP: Having no error when timeout is shorter than webhook latency and failure policy is ignore STEP: Registering slow webhook via the AdmissionRegistration API STEP: Having no error when timeout is longer than webhook latency STEP: Registering slow webhook via the AdmissionRegistration API STEP: Having no error when timeout is empty (defaulted to 10s in v1) STEP: Registering slow webhook via the AdmissionRegistration API [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 1 00:14:35.522: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-2092" for this 
suite. STEP: Destroying namespace "webhook-2092-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:18.173 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should honor timeout [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","total":294,"completed":107,"skipped":1797,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 1 00:14:35.718: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename e2e-kubelet-etc-hosts STEP: Waiting for a default service account to be provisioned in namespace [It] should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Setting up the test STEP: Creating hostNetwork=false pod STEP: Creating hostNetwork=true pod STEP: Running the test STEP: Verifying /etc/hosts of container is kubelet-managed for pod with hostNetwork=false Jul 1 00:14:45.825: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-3146 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jul 1 00:14:45.825: INFO: >>> kubeConfig: /root/.kube/config I0701 00:14:45.860839 8 log.go:172] (0xc002860370) (0xc000f66d20) Create stream I0701 00:14:45.860870 8 log.go:172] (0xc002860370) (0xc000f66d20) Stream added, broadcasting: 1 I0701 00:14:45.862692 8 log.go:172] (0xc002860370) Reply frame received for 1 I0701 00:14:45.862736 8 log.go:172] (0xc002860370) (0xc0026dce60) Create stream I0701 00:14:45.862748 8 log.go:172] (0xc002860370) (0xc0026dce60) Stream added, broadcasting: 3 I0701 00:14:45.863699 8 log.go:172] (0xc002860370) Reply frame received for 3 I0701 00:14:45.863743 8 log.go:172] (0xc002860370) (0xc0011f5360) Create stream I0701 00:14:45.863758 8 log.go:172] (0xc002860370) (0xc0011f5360) Stream added, broadcasting: 5 I0701 00:14:45.864661 8 log.go:172] (0xc002860370) Reply frame received for 5 I0701 00:14:45.931427 8 log.go:172] (0xc002860370) Data frame received for 5 I0701 00:14:45.931461 8 log.go:172] (0xc0011f5360) (5) Data frame handling I0701 00:14:45.931482 8 log.go:172] (0xc002860370) Data frame received for 3 I0701 00:14:45.931496 8 log.go:172] (0xc0026dce60) (3) Data frame handling I0701 00:14:45.931503 8 log.go:172] (0xc0026dce60) (3) Data frame sent I0701 00:14:45.931706 8 log.go:172] (0xc002860370) Data frame received for 3 I0701 00:14:45.931730 8 log.go:172] (0xc0026dce60) (3) Data frame handling I0701 00:14:45.933104 8 log.go:172] (0xc002860370) Data frame received for 1 I0701 00:14:45.933277 8 log.go:172] (0xc000f66d20) (1) Data frame handling I0701 
00:14:45.933311 8 log.go:172] (0xc000f66d20) (1) Data frame sent I0701 00:14:45.933344 8 log.go:172] (0xc002860370) (0xc000f66d20) Stream removed, broadcasting: 1 I0701 00:14:45.933379 8 log.go:172] (0xc002860370) Go away received I0701 00:14:45.933513 8 log.go:172] (0xc002860370) (0xc000f66d20) Stream removed, broadcasting: 1 I0701 00:14:45.933537 8 log.go:172] (0xc002860370) (0xc0026dce60) Stream removed, broadcasting: 3 I0701 00:14:45.933548 8 log.go:172] (0xc002860370) (0xc0011f5360) Stream removed, broadcasting: 5 Jul 1 00:14:45.933: INFO: Exec stderr: "" Jul 1 00:14:45.933: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-3146 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jul 1 00:14:45.933: INFO: >>> kubeConfig: /root/.kube/config I0701 00:14:45.959101 8 log.go:172] (0xc00191a9a0) (0xc0026dd0e0) Create stream I0701 00:14:45.959126 8 log.go:172] (0xc00191a9a0) (0xc0026dd0e0) Stream added, broadcasting: 1 I0701 00:14:45.961075 8 log.go:172] (0xc00191a9a0) Reply frame received for 1 I0701 00:14:45.961105 8 log.go:172] (0xc00191a9a0) (0xc0011f5540) Create stream I0701 00:14:45.961269 8 log.go:172] (0xc00191a9a0) (0xc0011f5540) Stream added, broadcasting: 3 I0701 00:14:45.962462 8 log.go:172] (0xc00191a9a0) Reply frame received for 3 I0701 00:14:45.962496 8 log.go:172] (0xc00191a9a0) (0xc001bb14a0) Create stream I0701 00:14:45.962509 8 log.go:172] (0xc00191a9a0) (0xc001bb14a0) Stream added, broadcasting: 5 I0701 00:14:45.963359 8 log.go:172] (0xc00191a9a0) Reply frame received for 5 I0701 00:14:46.033541 8 log.go:172] (0xc00191a9a0) Data frame received for 3 I0701 00:14:46.033596 8 log.go:172] (0xc0011f5540) (3) Data frame handling I0701 00:14:46.033626 8 log.go:172] (0xc0011f5540) (3) Data frame sent I0701 00:14:46.033642 8 log.go:172] (0xc00191a9a0) Data frame received for 3 I0701 00:14:46.033671 8 log.go:172] (0xc00191a9a0) Data frame received for 5 I0701 00:14:46.033711 8 log.go:172] (0xc001bb14a0) (5) Data frame handling I0701 00:14:46.033738 8 log.go:172] (0xc0011f5540) (3) Data frame handling I0701 00:14:46.035131 8 log.go:172] (0xc00191a9a0) Data frame received for 1 I0701 00:14:46.035173 8 log.go:172] (0xc0026dd0e0) (1) Data frame handling I0701 00:14:46.035223 8 log.go:172] (0xc0026dd0e0) (1) Data frame sent I0701 00:14:46.035260 8 log.go:172] (0xc00191a9a0) (0xc0026dd0e0) Stream removed, broadcasting: 1 I0701 00:14:46.035291 8 log.go:172] (0xc00191a9a0) Go away received I0701 00:14:46.035420 8 log.go:172] (0xc00191a9a0) (0xc0026dd0e0) Stream removed, broadcasting: 1 I0701 00:14:46.035450 8 log.go:172] (0xc00191a9a0) (0xc0011f5540) Stream removed, broadcasting: 3 I0701 00:14:46.035480 8 log.go:172] (0xc00191a9a0) (0xc001bb14a0) Stream removed, broadcasting: 5 Jul 1 00:14:46.035: INFO: Exec stderr: "" Jul 1 00:14:46.035: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-3146 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jul 1 00:14:46.035: INFO: >>> kubeConfig: /root/.kube/config I0701 00:14:46.069436 8 log.go:172] (0xc0028609a0) (0xc000f670e0) Create stream I0701 00:14:46.069458 8 log.go:172] (0xc0028609a0) (0xc000f670e0) Stream added, broadcasting: 1 I0701 00:14:46.071468 8 log.go:172] (0xc0028609a0) Reply frame received for 1 I0701 00:14:46.071518 8 log.go:172] (0xc0028609a0) (0xc0011f5680) Create stream I0701 00:14:46.071542 8 log.go:172] (0xc0028609a0) (0xc0011f5680) Stream 
added, broadcasting: 3 I0701 00:14:46.072601 8 log.go:172] (0xc0028609a0) Reply frame received for 3 I0701 00:14:46.072637 8 log.go:172] (0xc0028609a0) (0xc001bb1540) Create stream I0701 00:14:46.072654 8 log.go:172] (0xc0028609a0) (0xc001bb1540) Stream added, broadcasting: 5 I0701 00:14:46.073996 8 log.go:172] (0xc0028609a0) Reply frame received for 5 I0701 00:14:46.131334 8 log.go:172] (0xc0028609a0) Data frame received for 3 I0701 00:14:46.131379 8 log.go:172] (0xc0011f5680) (3) Data frame handling I0701 00:14:46.131411 8 log.go:172] (0xc0011f5680) (3) Data frame sent I0701 00:14:46.131465 8 log.go:172] (0xc0028609a0) Data frame received for 5 I0701 00:14:46.131498 8 log.go:172] (0xc001bb1540) (5) Data frame handling I0701 00:14:46.131627 8 log.go:172] (0xc0028609a0) Data frame received for 3 I0701 00:14:46.131653 8 log.go:172] (0xc0011f5680) (3) Data frame handling I0701 00:14:46.132989 8 log.go:172] (0xc0028609a0) Data frame received for 1 I0701 00:14:46.133014 8 log.go:172] (0xc000f670e0) (1) Data frame handling I0701 00:14:46.133033 8 log.go:172] (0xc000f670e0) (1) Data frame sent I0701 00:14:46.133059 8 log.go:172] (0xc0028609a0) (0xc000f670e0) Stream removed, broadcasting: 1 I0701 00:14:46.133348 8 log.go:172] (0xc0028609a0) (0xc000f670e0) Stream removed, broadcasting: 1 I0701 00:14:46.133379 8 log.go:172] (0xc0028609a0) (0xc0011f5680) Stream removed, broadcasting: 3 I0701 00:14:46.133497 8 log.go:172] (0xc0028609a0) Go away received I0701 00:14:46.133604 8 log.go:172] (0xc0028609a0) (0xc001bb1540) Stream removed, broadcasting: 5 Jul 1 00:14:46.133: INFO: Exec stderr: "" Jul 1 00:14:46.133: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-3146 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jul 1 00:14:46.133: INFO: >>> kubeConfig: /root/.kube/config I0701 00:14:46.167571 8 log.go:172] (0xc001eb5760) (0xc001bb1900) Create stream I0701 00:14:46.167607 8 log.go:172] (0xc001eb5760) (0xc001bb1900) Stream added, broadcasting: 1 I0701 00:14:46.169997 8 log.go:172] (0xc001eb5760) Reply frame received for 1 I0701 00:14:46.170027 8 log.go:172] (0xc001eb5760) (0xc001bb1a40) Create stream I0701 00:14:46.170039 8 log.go:172] (0xc001eb5760) (0xc001bb1a40) Stream added, broadcasting: 3 I0701 00:14:46.171079 8 log.go:172] (0xc001eb5760) Reply frame received for 3 I0701 00:14:46.171195 8 log.go:172] (0xc001eb5760) (0xc000f674a0) Create stream I0701 00:14:46.171206 8 log.go:172] (0xc001eb5760) (0xc000f674a0) Stream added, broadcasting: 5 I0701 00:14:46.172330 8 log.go:172] (0xc001eb5760) Reply frame received for 5 I0701 00:14:46.248402 8 log.go:172] (0xc001eb5760) Data frame received for 5 I0701 00:14:46.248443 8 log.go:172] (0xc000f674a0) (5) Data frame handling I0701 00:14:46.248468 8 log.go:172] (0xc001eb5760) Data frame received for 3 I0701 00:14:46.248487 8 log.go:172] (0xc001bb1a40) (3) Data frame handling I0701 00:14:46.248506 8 log.go:172] (0xc001bb1a40) (3) Data frame sent I0701 00:14:46.248688 8 log.go:172] (0xc001eb5760) Data frame received for 3 I0701 00:14:46.248706 8 log.go:172] (0xc001bb1a40) (3) Data frame handling I0701 00:14:46.250472 8 log.go:172] (0xc001eb5760) Data frame received for 1 I0701 00:14:46.250495 8 log.go:172] (0xc001bb1900) (1) Data frame handling I0701 00:14:46.250510 8 log.go:172] (0xc001bb1900) (1) Data frame sent I0701 00:14:46.250531 8 log.go:172] (0xc001eb5760) (0xc001bb1900) Stream removed, broadcasting: 1 I0701 00:14:46.250545 8 log.go:172] (0xc001eb5760) 
Go away received I0701 00:14:46.250625 8 log.go:172] (0xc001eb5760) (0xc001bb1900) Stream removed, broadcasting: 1 I0701 00:14:46.250647 8 log.go:172] (0xc001eb5760) (0xc001bb1a40) Stream removed, broadcasting: 3 I0701 00:14:46.250662 8 log.go:172] (0xc001eb5760) (0xc000f674a0) Stream removed, broadcasting: 5 Jul 1 00:14:46.250: INFO: Exec stderr: "" STEP: Verifying /etc/hosts of container is not kubelet-managed since container specifies /etc/hosts mount Jul 1 00:14:46.250: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-3146 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jul 1 00:14:46.250: INFO: >>> kubeConfig: /root/.kube/config I0701 00:14:46.287916 8 log.go:172] (0xc002860fd0) (0xc000f677c0) Create stream I0701 00:14:46.287947 8 log.go:172] (0xc002860fd0) (0xc000f677c0) Stream added, broadcasting: 1 I0701 00:14:46.290301 8 log.go:172] (0xc002860fd0) Reply frame received for 1 I0701 00:14:46.290344 8 log.go:172] (0xc002860fd0) (0xc001bb1ae0) Create stream I0701 00:14:46.290360 8 log.go:172] (0xc002860fd0) (0xc001bb1ae0) Stream added, broadcasting: 3 I0701 00:14:46.291351 8 log.go:172] (0xc002860fd0) Reply frame received for 3 I0701 00:14:46.291392 8 log.go:172] (0xc002860fd0) (0xc001bb1cc0) Create stream I0701 00:14:46.291414 8 log.go:172] (0xc002860fd0) (0xc001bb1cc0) Stream added, broadcasting: 5 I0701 00:14:46.292492 8 log.go:172] (0xc002860fd0) Reply frame received for 5 I0701 00:14:46.347794 8 log.go:172] (0xc002860fd0) Data frame received for 3 I0701 00:14:46.347827 8 log.go:172] (0xc001bb1ae0) (3) Data frame handling I0701 00:14:46.347854 8 log.go:172] (0xc001bb1ae0) (3) Data frame sent I0701 00:14:46.347868 8 log.go:172] (0xc002860fd0) Data frame received for 3 I0701 00:14:46.347880 8 log.go:172] (0xc001bb1ae0) (3) Data frame handling I0701 00:14:46.347931 8 log.go:172] (0xc002860fd0) Data frame received for 5 I0701 00:14:46.347955 8 log.go:172] (0xc001bb1cc0) (5) Data frame handling I0701 00:14:46.350017 8 log.go:172] (0xc002860fd0) Data frame received for 1 I0701 00:14:46.350042 8 log.go:172] (0xc000f677c0) (1) Data frame handling I0701 00:14:46.350077 8 log.go:172] (0xc000f677c0) (1) Data frame sent I0701 00:14:46.350104 8 log.go:172] (0xc002860fd0) (0xc000f677c0) Stream removed, broadcasting: 1 I0701 00:14:46.350190 8 log.go:172] (0xc002860fd0) (0xc000f677c0) Stream removed, broadcasting: 1 I0701 00:14:46.350204 8 log.go:172] (0xc002860fd0) (0xc001bb1ae0) Stream removed, broadcasting: 3 I0701 00:14:46.350252 8 log.go:172] (0xc002860fd0) Go away received I0701 00:14:46.350392 8 log.go:172] (0xc002860fd0) (0xc001bb1cc0) Stream removed, broadcasting: 5 Jul 1 00:14:46.350: INFO: Exec stderr: "" Jul 1 00:14:46.350: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-3146 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jul 1 00:14:46.350: INFO: >>> kubeConfig: /root/.kube/config I0701 00:14:46.384637 8 log.go:172] (0xc00191afd0) (0xc0026dd360) Create stream I0701 00:14:46.384663 8 log.go:172] (0xc00191afd0) (0xc0026dd360) Stream added, broadcasting: 1 I0701 00:14:46.386717 8 log.go:172] (0xc00191afd0) Reply frame received for 1 I0701 00:14:46.386771 8 log.go:172] (0xc00191afd0) (0xc0011f5720) Create stream I0701 00:14:46.386801 8 log.go:172] (0xc00191afd0) (0xc0011f5720) Stream added, broadcasting: 3 I0701 00:14:46.387757 8 log.go:172] (0xc00191afd0) Reply frame received for 3 I0701 
00:14:46.387794 8 log.go:172] (0xc00191afd0) (0xc000f67860) Create stream I0701 00:14:46.387815 8 log.go:172] (0xc00191afd0) (0xc000f67860) Stream added, broadcasting: 5 I0701 00:14:46.388703 8 log.go:172] (0xc00191afd0) Reply frame received for 5 I0701 00:14:46.458525 8 log.go:172] (0xc00191afd0) Data frame received for 5 I0701 00:14:46.458552 8 log.go:172] (0xc000f67860) (5) Data frame handling I0701 00:14:46.458571 8 log.go:172] (0xc00191afd0) Data frame received for 3 I0701 00:14:46.458582 8 log.go:172] (0xc0011f5720) (3) Data frame handling I0701 00:14:46.458598 8 log.go:172] (0xc0011f5720) (3) Data frame sent I0701 00:14:46.458612 8 log.go:172] (0xc00191afd0) Data frame received for 3 I0701 00:14:46.458619 8 log.go:172] (0xc0011f5720) (3) Data frame handling I0701 00:14:46.460138 8 log.go:172] (0xc00191afd0) Data frame received for 1 I0701 00:14:46.460182 8 log.go:172] (0xc0026dd360) (1) Data frame handling I0701 00:14:46.460220 8 log.go:172] (0xc0026dd360) (1) Data frame sent I0701 00:14:46.460240 8 log.go:172] (0xc00191afd0) (0xc0026dd360) Stream removed, broadcasting: 1 I0701 00:14:46.460262 8 log.go:172] (0xc00191afd0) Go away received I0701 00:14:46.460400 8 log.go:172] (0xc00191afd0) (0xc0026dd360) Stream removed, broadcasting: 1 I0701 00:14:46.460425 8 log.go:172] (0xc00191afd0) (0xc0011f5720) Stream removed, broadcasting: 3 I0701 00:14:46.460438 8 log.go:172] (0xc00191afd0) (0xc000f67860) Stream removed, broadcasting: 5 Jul 1 00:14:46.460: INFO: Exec stderr: "" STEP: Verifying /etc/hosts content of container is not kubelet-managed for pod with hostNetwork=true Jul 1 00:14:46.460: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-3146 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jul 1 00:14:46.460: INFO: >>> kubeConfig: /root/.kube/config I0701 00:14:46.499201 8 log.go:172] (0xc00191b600) (0xc0026dd7c0) Create stream I0701 00:14:46.499224 8 log.go:172] (0xc00191b600) (0xc0026dd7c0) Stream added, broadcasting: 1 I0701 00:14:46.501429 8 log.go:172] (0xc00191b600) Reply frame received for 1 I0701 00:14:46.501475 8 log.go:172] (0xc00191b600) (0xc001742460) Create stream I0701 00:14:46.501494 8 log.go:172] (0xc00191b600) (0xc001742460) Stream added, broadcasting: 3 I0701 00:14:46.502436 8 log.go:172] (0xc00191b600) Reply frame received for 3 I0701 00:14:46.502466 8 log.go:172] (0xc00191b600) (0xc0026dd860) Create stream I0701 00:14:46.502479 8 log.go:172] (0xc00191b600) (0xc0026dd860) Stream added, broadcasting: 5 I0701 00:14:46.503287 8 log.go:172] (0xc00191b600) Reply frame received for 5 I0701 00:14:46.554021 8 log.go:172] (0xc00191b600) Data frame received for 3 I0701 00:14:46.554041 8 log.go:172] (0xc001742460) (3) Data frame handling I0701 00:14:46.554054 8 log.go:172] (0xc001742460) (3) Data frame sent I0701 00:14:46.554062 8 log.go:172] (0xc00191b600) Data frame received for 3 I0701 00:14:46.554083 8 log.go:172] (0xc001742460) (3) Data frame handling I0701 00:14:46.554309 8 log.go:172] (0xc00191b600) Data frame received for 5 I0701 00:14:46.554358 8 log.go:172] (0xc0026dd860) (5) Data frame handling I0701 00:14:46.555775 8 log.go:172] (0xc00191b600) Data frame received for 1 I0701 00:14:46.555787 8 log.go:172] (0xc0026dd7c0) (1) Data frame handling I0701 00:14:46.555792 8 log.go:172] (0xc0026dd7c0) (1) Data frame sent I0701 00:14:46.555800 8 log.go:172] (0xc00191b600) (0xc0026dd7c0) Stream removed, broadcasting: 1 I0701 00:14:46.555827 8 log.go:172] (0xc00191b600) 
Go away received I0701 00:14:46.555857 8 log.go:172] (0xc00191b600) (0xc0026dd7c0) Stream removed, broadcasting: 1 I0701 00:14:46.555866 8 log.go:172] (0xc00191b600) (0xc001742460) Stream removed, broadcasting: 3 I0701 00:14:46.555872 8 log.go:172] (0xc00191b600) (0xc0026dd860) Stream removed, broadcasting: 5 Jul 1 00:14:46.555: INFO: Exec stderr: "" Jul 1 00:14:46.555: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-3146 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jul 1 00:14:46.555: INFO: >>> kubeConfig: /root/.kube/config I0701 00:14:46.586629 8 log.go:172] (0xc002861600) (0xc000f67c20) Create stream I0701 00:14:46.586653 8 log.go:172] (0xc002861600) (0xc000f67c20) Stream added, broadcasting: 1 I0701 00:14:46.588481 8 log.go:172] (0xc002861600) Reply frame received for 1 I0701 00:14:46.588514 8 log.go:172] (0xc002861600) (0xc000f67cc0) Create stream I0701 00:14:46.588526 8 log.go:172] (0xc002861600) (0xc000f67cc0) Stream added, broadcasting: 3 I0701 00:14:46.589686 8 log.go:172] (0xc002861600) Reply frame received for 3 I0701 00:14:46.589744 8 log.go:172] (0xc002861600) (0xc0026dd900) Create stream I0701 00:14:46.589763 8 log.go:172] (0xc002861600) (0xc0026dd900) Stream added, broadcasting: 5 I0701 00:14:46.590546 8 log.go:172] (0xc002861600) Reply frame received for 5 I0701 00:14:46.658476 8 log.go:172] (0xc002861600) Data frame received for 5 I0701 00:14:46.658523 8 log.go:172] (0xc0026dd900) (5) Data frame handling I0701 00:14:46.658553 8 log.go:172] (0xc002861600) Data frame received for 3 I0701 00:14:46.658568 8 log.go:172] (0xc000f67cc0) (3) Data frame handling I0701 00:14:46.658584 8 log.go:172] (0xc000f67cc0) (3) Data frame sent I0701 00:14:46.658600 8 log.go:172] (0xc002861600) Data frame received for 3 I0701 00:14:46.658619 8 log.go:172] (0xc000f67cc0) (3) Data frame handling I0701 00:14:46.660013 8 log.go:172] (0xc002861600) Data frame received for 1 I0701 00:14:46.660032 8 log.go:172] (0xc000f67c20) (1) Data frame handling I0701 00:14:46.660066 8 log.go:172] (0xc000f67c20) (1) Data frame sent I0701 00:14:46.660095 8 log.go:172] (0xc002861600) (0xc000f67c20) Stream removed, broadcasting: 1 I0701 00:14:46.660210 8 log.go:172] (0xc002861600) (0xc000f67c20) Stream removed, broadcasting: 1 I0701 00:14:46.660240 8 log.go:172] (0xc002861600) (0xc000f67cc0) Stream removed, broadcasting: 3 I0701 00:14:46.660272 8 log.go:172] (0xc002861600) Go away received I0701 00:14:46.660330 8 log.go:172] (0xc002861600) (0xc0026dd900) Stream removed, broadcasting: 5 Jul 1 00:14:46.660: INFO: Exec stderr: "" Jul 1 00:14:46.660: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-3146 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jul 1 00:14:46.660: INFO: >>> kubeConfig: /root/.kube/config I0701 00:14:46.693554 8 log.go:172] (0xc001eb5d90) (0xc001bb1f40) Create stream I0701 00:14:46.693592 8 log.go:172] (0xc001eb5d90) (0xc001bb1f40) Stream added, broadcasting: 1 I0701 00:14:46.695800 8 log.go:172] (0xc001eb5d90) Reply frame received for 1 I0701 00:14:46.695985 8 log.go:172] (0xc001eb5d90) (0xc000f67ea0) Create stream I0701 00:14:46.695998 8 log.go:172] (0xc001eb5d90) (0xc000f67ea0) Stream added, broadcasting: 3 I0701 00:14:46.697009 8 log.go:172] (0xc001eb5d90) Reply frame received for 3 I0701 00:14:46.697056 8 log.go:172] (0xc001eb5d90) (0xc0011f59a0) Create stream I0701 
00:14:46.697071 8 log.go:172] (0xc001eb5d90) (0xc0011f59a0) Stream added, broadcasting: 5 I0701 00:14:46.698333 8 log.go:172] (0xc001eb5d90) Reply frame received for 5 I0701 00:14:46.767089 8 log.go:172] (0xc001eb5d90) Data frame received for 3 I0701 00:14:46.767118 8 log.go:172] (0xc000f67ea0) (3) Data frame handling I0701 00:14:46.767131 8 log.go:172] (0xc000f67ea0) (3) Data frame sent I0701 00:14:46.767149 8 log.go:172] (0xc001eb5d90) Data frame received for 5 I0701 00:14:46.767165 8 log.go:172] (0xc0011f59a0) (5) Data frame handling I0701 00:14:46.767182 8 log.go:172] (0xc001eb5d90) Data frame received for 3 I0701 00:14:46.767189 8 log.go:172] (0xc000f67ea0) (3) Data frame handling I0701 00:14:46.769020 8 log.go:172] (0xc001eb5d90) Data frame received for 1 I0701 00:14:46.769032 8 log.go:172] (0xc001bb1f40) (1) Data frame handling I0701 00:14:46.769041 8 log.go:172] (0xc001bb1f40) (1) Data frame sent I0701 00:14:46.769060 8 log.go:172] (0xc001eb5d90) (0xc001bb1f40) Stream removed, broadcasting: 1 I0701 00:14:46.769260 8 log.go:172] (0xc001eb5d90) (0xc001bb1f40) Stream removed, broadcasting: 1 I0701 00:14:46.769278 8 log.go:172] (0xc001eb5d90) Go away received I0701 00:14:46.769297 8 log.go:172] (0xc001eb5d90) (0xc000f67ea0) Stream removed, broadcasting: 3 I0701 00:14:46.769316 8 log.go:172] (0xc001eb5d90) (0xc0011f59a0) Stream removed, broadcasting: 5 Jul 1 00:14:46.769: INFO: Exec stderr: "" Jul 1 00:14:46.769: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-3146 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jul 1 00:14:46.769: INFO: >>> kubeConfig: /root/.kube/config I0701 00:14:46.801849 8 log.go:172] (0xc00191bb80) (0xc0026ddae0) Create stream I0701 00:14:46.801876 8 log.go:172] (0xc00191bb80) (0xc0026ddae0) Stream added, broadcasting: 1 I0701 00:14:46.803685 8 log.go:172] (0xc00191bb80) Reply frame received for 1 I0701 00:14:46.803722 8 log.go:172] (0xc00191bb80) (0xc001742640) Create stream I0701 00:14:46.803731 8 log.go:172] (0xc00191bb80) (0xc001742640) Stream added, broadcasting: 3 I0701 00:14:46.804530 8 log.go:172] (0xc00191bb80) Reply frame received for 3 I0701 00:14:46.804565 8 log.go:172] (0xc00191bb80) (0xc000d9c280) Create stream I0701 00:14:46.804575 8 log.go:172] (0xc00191bb80) (0xc000d9c280) Stream added, broadcasting: 5 I0701 00:14:46.805821 8 log.go:172] (0xc00191bb80) Reply frame received for 5 I0701 00:14:46.863273 8 log.go:172] (0xc00191bb80) Data frame received for 5 I0701 00:14:46.863315 8 log.go:172] (0xc000d9c280) (5) Data frame handling I0701 00:14:46.863344 8 log.go:172] (0xc00191bb80) Data frame received for 3 I0701 00:14:46.863356 8 log.go:172] (0xc001742640) (3) Data frame handling I0701 00:14:46.863370 8 log.go:172] (0xc001742640) (3) Data frame sent I0701 00:14:46.863384 8 log.go:172] (0xc00191bb80) Data frame received for 3 I0701 00:14:46.863396 8 log.go:172] (0xc001742640) (3) Data frame handling I0701 00:14:46.864479 8 log.go:172] (0xc00191bb80) Data frame received for 1 I0701 00:14:46.864507 8 log.go:172] (0xc0026ddae0) (1) Data frame handling I0701 00:14:46.864524 8 log.go:172] (0xc0026ddae0) (1) Data frame sent I0701 00:14:46.864546 8 log.go:172] (0xc00191bb80) (0xc0026ddae0) Stream removed, broadcasting: 1 I0701 00:14:46.864574 8 log.go:172] (0xc00191bb80) Go away received I0701 00:14:46.864647 8 log.go:172] (0xc00191bb80) (0xc0026ddae0) Stream removed, broadcasting: 1 I0701 00:14:46.864663 8 log.go:172] (0xc00191bb80) 
(0xc001742640) Stream removed, broadcasting: 3 I0701 00:14:46.864673 8 log.go:172] (0xc00191bb80) (0xc000d9c280) Stream removed, broadcasting: 5 Jul 1 00:14:46.864: INFO: Exec stderr: "" [AfterEach] [k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 1 00:14:46.864: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-kubelet-etc-hosts-3146" for this suite. • [SLOW TEST:11.172 seconds] [k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]","total":294,"completed":108,"skipped":1812,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] removes definition from spec when one version gets changed to not be served [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 1 00:14:46.890: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] removes definition from spec when one version gets changed to not be served [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: set up a multi version CRD Jul 1 00:14:46.960: INFO: >>> kubeConfig: /root/.kube/config STEP: mark a version not served STEP: check the unserved version gets removed STEP: check the other version is not changed [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 1 00:15:01.168: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-6871" for this suite.
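Flipping Served to false on one CRD version is what removes that version's definitions from the published OpenAPI document, while the still-served version's schema must remain unchanged; that is exactly what the test checks. A sketch of such a two-version CRD with apiextensions-apiserver types (the group and kind names are illustrative):

package main

import (
	"fmt"

	apiextensionsv1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func multiVersionCRD() *apiextensionsv1.CustomResourceDefinition {
	schema := &apiextensionsv1.CustomResourceValidation{
		OpenAPIV3Schema: &apiextensionsv1.JSONSchemaProps{Type: "object"},
	}
	return &apiextensionsv1.CustomResourceDefinition{
		ObjectMeta: metav1.ObjectMeta{Name: "foos.example.com"},
		Spec: apiextensionsv1.CustomResourceDefinitionSpec{
			Group: "example.com",
			Scope: apiextensionsv1.NamespaceScoped,
			Names: apiextensionsv1.CustomResourceDefinitionNames{
				Plural: "foos", Singular: "foo", Kind: "Foo", ListKind: "FooList",
			},
			Versions: []apiextensionsv1.CustomResourceDefinitionVersion{
				// Changing Served to false here is the "mark a version not served"
				// step; the apiserver then drops v1's definitions from the spec.
				{Name: "v1", Served: true, Storage: false, Schema: schema},
				{Name: "v2", Served: true, Storage: true, Schema: schema},
			},
		},
	}
}

func main() { fmt.Println(multiVersionCRD().Name) }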
• [SLOW TEST:14.283 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 removes definition from spec when one version gets changed to not be served [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] removes definition from spec when one version gets changed to not be served [Conformance]","total":294,"completed":109,"skipped":1824,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 1 00:15:01.174: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir 0777 on tmpfs Jul 1 00:15:01.245: INFO: Waiting up to 5m0s for pod "pod-3799332a-3bc7-47dd-a4b9-75dc798a41e5" in namespace "emptydir-5630" to be "Succeeded or Failed" Jul 1 00:15:01.268: INFO: Pod "pod-3799332a-3bc7-47dd-a4b9-75dc798a41e5": Phase="Pending", Reason="", readiness=false. Elapsed: 22.919018ms Jul 1 00:15:03.272: INFO: Pod "pod-3799332a-3bc7-47dd-a4b9-75dc798a41e5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.026942127s Jul 1 00:15:05.276: INFO: Pod "pod-3799332a-3bc7-47dd-a4b9-75dc798a41e5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.031054723s STEP: Saw pod success Jul 1 00:15:05.276: INFO: Pod "pod-3799332a-3bc7-47dd-a4b9-75dc798a41e5" satisfied condition "Succeeded or Failed" Jul 1 00:15:05.279: INFO: Trying to get logs from node latest-worker pod pod-3799332a-3bc7-47dd-a4b9-75dc798a41e5 container test-container: STEP: delete the pod Jul 1 00:15:05.312: INFO: Waiting for pod pod-3799332a-3bc7-47dd-a4b9-75dc798a41e5 to disappear Jul 1 00:15:05.316: INFO: Pod pod-3799332a-3bc7-47dd-a4b9-75dc798a41e5 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 1 00:15:05.316: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-5630" for this suite. 
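The emptyDir test name encodes its three dimensions: the container runs as a non-root user, the file mode under test is 0777, and medium "Memory" backs the volume with tmpfs. A sketch of a pod along those lines; the UID, image, and shell command approximate what the suite's mounttest image does and are not taken from the test source:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func nonRoot() *int64 { uid := int64(1001); return &uid } // assumed UID

func emptyDirPod() *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-emptydir-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:  "test-container",
				Image: "busybox",
				// Create a file with 0777 perms on the tmpfs mount and read
				// its mode back, roughly what the mounttest image verifies.
				Command: []string{"sh", "-c",
					"touch /test-volume/f && chmod 0777 /test-volume/f && stat -c %a /test-volume/f"},
				SecurityContext: &corev1.SecurityContext{RunAsUser: nonRoot()},
				VolumeMounts:    []corev1.VolumeMount{{Name: "test-volume", MountPath: "/test-volume"}},
			}},
			Volumes: []corev1.Volume{{
				Name: "test-volume",
				VolumeSource: corev1.VolumeSource{
					// Medium "Memory" backs the emptyDir with tmpfs.
					EmptyDir: &corev1.EmptyDirVolumeSource{Medium: corev1.StorageMediumMemory},
				},
			}},
		},
	}
}

func main() { fmt.Println(emptyDirPod().Name) }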
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":294,"completed":110,"skipped":1837,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 1 00:15:05.324: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating secret with name projected-secret-test-2a8b5e33-a055-42b5-911d-cf8895bcfe35 STEP: Creating a pod to test consume secrets Jul 1 00:15:05.653: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-5902f043-4f94-4937-8988-3713b9c301ad" in namespace "projected-7787" to be "Succeeded or Failed" Jul 1 00:15:05.683: INFO: Pod "pod-projected-secrets-5902f043-4f94-4937-8988-3713b9c301ad": Phase="Pending", Reason="", readiness=false. Elapsed: 29.408639ms Jul 1 00:15:07.687: INFO: Pod "pod-projected-secrets-5902f043-4f94-4937-8988-3713b9c301ad": Phase="Pending", Reason="", readiness=false. Elapsed: 2.033473039s Jul 1 00:15:09.693: INFO: Pod "pod-projected-secrets-5902f043-4f94-4937-8988-3713b9c301ad": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.039410919s STEP: Saw pod success Jul 1 00:15:09.693: INFO: Pod "pod-projected-secrets-5902f043-4f94-4937-8988-3713b9c301ad" satisfied condition "Succeeded or Failed" Jul 1 00:15:09.695: INFO: Trying to get logs from node latest-worker pod pod-projected-secrets-5902f043-4f94-4937-8988-3713b9c301ad container secret-volume-test: STEP: delete the pod Jul 1 00:15:09.734: INFO: Waiting for pod pod-projected-secrets-5902f043-4f94-4937-8988-3713b9c301ad to disappear Jul 1 00:15:09.758: INFO: Pod pod-projected-secrets-5902f043-4f94-4937-8988-3713b9c301ad no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 1 00:15:09.758: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-7787" for this suite. 
•{"msg":"PASSED [sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":294,"completed":111,"skipped":1878,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 1 00:15:09.767: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:809 [It] should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating service in namespace services-1853 STEP: creating service affinity-clusterip in namespace services-1853 STEP: creating replication controller affinity-clusterip in namespace services-1853 I0701 00:15:09.924057 8 runners.go:190] Created replication controller with name: affinity-clusterip, namespace: services-1853, replica count: 3 I0701 00:15:12.974513 8 runners.go:190] affinity-clusterip Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0701 00:15:15.974794 8 runners.go:190] affinity-clusterip Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Jul 1 00:15:15.982: INFO: Creating new exec pod Jul 1 00:15:21.027: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-1853 execpod-affinityjh9pq -- /bin/sh -x -c nc -zv -t -w 2 affinity-clusterip 80' Jul 1 00:15:21.289: INFO: stderr: "I0701 00:15:21.175162 996 log.go:172] (0xc000a056b0) (0xc0008472c0) Create stream\nI0701 00:15:21.175235 996 log.go:172] (0xc000a056b0) (0xc0008472c0) Stream added, broadcasting: 1\nI0701 00:15:21.178542 996 log.go:172] (0xc000a056b0) Reply frame received for 1\nI0701 00:15:21.178609 996 log.go:172] (0xc000a056b0) (0xc000680e60) Create stream\nI0701 00:15:21.178633 996 log.go:172] (0xc000a056b0) (0xc000680e60) Stream added, broadcasting: 3\nI0701 00:15:21.180783 996 log.go:172] (0xc000a056b0) Reply frame received for 3\nI0701 00:15:21.180826 996 log.go:172] (0xc000a056b0) (0xc0003708c0) Create stream\nI0701 00:15:21.180843 996 log.go:172] (0xc000a056b0) (0xc0003708c0) Stream added, broadcasting: 5\nI0701 00:15:21.182431 996 log.go:172] (0xc000a056b0) Reply frame received for 5\nI0701 00:15:21.271224 996 log.go:172] (0xc000a056b0) Data frame received for 5\nI0701 00:15:21.271258 996 log.go:172] (0xc0003708c0) (5) Data frame handling\nI0701 00:15:21.271318 996 log.go:172] (0xc0003708c0) (5) Data frame sent\n+ nc -zv -t -w 2 affinity-clusterip 80\nI0701 00:15:21.282564 996 log.go:172] (0xc000a056b0) Data frame received for 5\nI0701 00:15:21.282593 996 log.go:172] (0xc0003708c0) (5) Data frame handling\nI0701 00:15:21.282613 996 log.go:172] (0xc0003708c0) (5) Data frame sent\nI0701 00:15:21.282631 996 log.go:172] (0xc000a056b0) Data frame received for 5\nI0701 00:15:21.282649 996 
log.go:172] (0xc0003708c0) (5) Data frame handling\nConnection to affinity-clusterip 80 port [tcp/http] succeeded!\nI0701 00:15:21.283054 996 log.go:172] (0xc000a056b0) Data frame received for 3\nI0701 00:15:21.283074 996 log.go:172] (0xc000680e60) (3) Data frame handling\nI0701 00:15:21.284076 996 log.go:172] (0xc000a056b0) Data frame received for 1\nI0701 00:15:21.284102 996 log.go:172] (0xc0008472c0) (1) Data frame handling\nI0701 00:15:21.284128 996 log.go:172] (0xc0008472c0) (1) Data frame sent\nI0701 00:15:21.284230 996 log.go:172] (0xc000a056b0) (0xc0008472c0) Stream removed, broadcasting: 1\nI0701 00:15:21.284451 996 log.go:172] (0xc000a056b0) (0xc0008472c0) Stream removed, broadcasting: 1\nI0701 00:15:21.284466 996 log.go:172] (0xc000a056b0) (0xc000680e60) Stream removed, broadcasting: 3\nI0701 00:15:21.284472 996 log.go:172] (0xc000a056b0) (0xc0003708c0) Stream removed, broadcasting: 5\n" Jul 1 00:15:21.290: INFO: stdout: "" Jul 1 00:15:21.291: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-1853 execpod-affinityjh9pq -- /bin/sh -x -c nc -zv -t -w 2 10.111.92.158 80' Jul 1 00:15:21.520: INFO: stderr: "I0701 00:15:21.433891 1016 log.go:172] (0xc0009bb3f0) (0xc000852dc0) Create stream\nI0701 00:15:21.433949 1016 log.go:172] (0xc0009bb3f0) (0xc000852dc0) Stream added, broadcasting: 1\nI0701 00:15:21.436547 1016 log.go:172] (0xc0009bb3f0) Reply frame received for 1\nI0701 00:15:21.436585 1016 log.go:172] (0xc0009bb3f0) (0xc00030e820) Create stream\nI0701 00:15:21.436602 1016 log.go:172] (0xc0009bb3f0) (0xc00030e820) Stream added, broadcasting: 3\nI0701 00:15:21.437765 1016 log.go:172] (0xc0009bb3f0) Reply frame received for 3\nI0701 00:15:21.437820 1016 log.go:172] (0xc0009bb3f0) (0xc000853360) Create stream\nI0701 00:15:21.437832 1016 log.go:172] (0xc0009bb3f0) (0xc000853360) Stream added, broadcasting: 5\nI0701 00:15:21.438853 1016 log.go:172] (0xc0009bb3f0) Reply frame received for 5\nI0701 00:15:21.510820 1016 log.go:172] (0xc0009bb3f0) Data frame received for 3\nI0701 00:15:21.510866 1016 log.go:172] (0xc00030e820) (3) Data frame handling\nI0701 00:15:21.510904 1016 log.go:172] (0xc0009bb3f0) Data frame received for 5\nI0701 00:15:21.510943 1016 log.go:172] (0xc000853360) (5) Data frame handling\nI0701 00:15:21.510966 1016 log.go:172] (0xc000853360) (5) Data frame sent\nI0701 00:15:21.510976 1016 log.go:172] (0xc0009bb3f0) Data frame received for 5\nI0701 00:15:21.510984 1016 log.go:172] (0xc000853360) (5) Data frame handling\n+ nc -zv -t -w 2 10.111.92.158 80\nConnection to 10.111.92.158 80 port [tcp/http] succeeded!\nI0701 00:15:21.512591 1016 log.go:172] (0xc0009bb3f0) Data frame received for 1\nI0701 00:15:21.512631 1016 log.go:172] (0xc000852dc0) (1) Data frame handling\nI0701 00:15:21.512658 1016 log.go:172] (0xc000852dc0) (1) Data frame sent\nI0701 00:15:21.512709 1016 log.go:172] (0xc0009bb3f0) (0xc000852dc0) Stream removed, broadcasting: 1\nI0701 00:15:21.512744 1016 log.go:172] (0xc0009bb3f0) Go away received\nI0701 00:15:21.513404 1016 log.go:172] (0xc0009bb3f0) (0xc000852dc0) Stream removed, broadcasting: 1\nI0701 00:15:21.513436 1016 log.go:172] (0xc0009bb3f0) (0xc00030e820) Stream removed, broadcasting: 3\nI0701 00:15:21.513449 1016 log.go:172] (0xc0009bb3f0) (0xc000853360) Stream removed, broadcasting: 5\n" Jul 1 00:15:21.520: INFO: stdout: "" Jul 1 00:15:21.521: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec 
--namespace=services-1853 execpod-affinityjh9pq -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://10.111.92.158:80/ ; done' Jul 1 00:15:21.842: INFO: stderr: "I0701 00:15:21.648673 1036 log.go:172] (0xc00093f1e0) (0xc0000dd0e0) Create stream\nI0701 00:15:21.648747 1036 log.go:172] (0xc00093f1e0) (0xc0000dd0e0) Stream added, broadcasting: 1\nI0701 00:15:21.655416 1036 log.go:172] (0xc00093f1e0) Reply frame received for 1\nI0701 00:15:21.656364 1036 log.go:172] (0xc00093f1e0) (0xc000452f00) Create stream\nI0701 00:15:21.656415 1036 log.go:172] (0xc00093f1e0) (0xc000452f00) Stream added, broadcasting: 3\nI0701 00:15:21.657490 1036 log.go:172] (0xc00093f1e0) Reply frame received for 3\nI0701 00:15:21.657539 1036 log.go:172] (0xc00093f1e0) (0xc000539180) Create stream\nI0701 00:15:21.657559 1036 log.go:172] (0xc00093f1e0) (0xc000539180) Stream added, broadcasting: 5\nI0701 00:15:21.658277 1036 log.go:172] (0xc00093f1e0) Reply frame received for 5\nI0701 00:15:21.713834 1036 log.go:172] (0xc00093f1e0) Data frame received for 3\nI0701 00:15:21.713884 1036 log.go:172] (0xc000452f00) (3) Data frame handling\nI0701 00:15:21.713902 1036 log.go:172] (0xc000452f00) (3) Data frame sent\nI0701 00:15:21.713931 1036 log.go:172] (0xc00093f1e0) Data frame received for 5\nI0701 00:15:21.713944 1036 log.go:172] (0xc000539180) (5) Data frame handling\nI0701 00:15:21.713964 1036 log.go:172] (0xc000539180) (5) Data frame sent\n+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.111.92.158:80/\nI0701 00:15:21.758514 1036 log.go:172] (0xc00093f1e0) Data frame received for 3\nI0701 00:15:21.758566 1036 log.go:172] (0xc000452f00) (3) Data frame handling\nI0701 00:15:21.758600 1036 log.go:172] (0xc000452f00) (3) Data frame sent\nI0701 00:15:21.759474 1036 log.go:172] (0xc00093f1e0) Data frame received for 3\nI0701 00:15:21.759513 1036 log.go:172] (0xc00093f1e0) Data frame received for 5\nI0701 00:15:21.759544 1036 log.go:172] (0xc000539180) (5) Data frame handling\nI0701 00:15:21.759577 1036 log.go:172] (0xc000539180) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.111.92.158:80/\nI0701 00:15:21.759613 1036 log.go:172] (0xc000452f00) (3) Data frame handling\nI0701 00:15:21.759642 1036 log.go:172] (0xc000452f00) (3) Data frame sent\nI0701 00:15:21.764569 1036 log.go:172] (0xc00093f1e0) Data frame received for 3\nI0701 00:15:21.764589 1036 log.go:172] (0xc000452f00) (3) Data frame handling\nI0701 00:15:21.764606 1036 log.go:172] (0xc000452f00) (3) Data frame sent\nI0701 00:15:21.765713 1036 log.go:172] (0xc00093f1e0) Data frame received for 3\nI0701 00:15:21.765739 1036 log.go:172] (0xc000452f00) (3) Data frame handling\nI0701 00:15:21.765757 1036 log.go:172] (0xc000452f00) (3) Data frame sent\nI0701 00:15:21.765783 1036 log.go:172] (0xc00093f1e0) Data frame received for 5\nI0701 00:15:21.765809 1036 log.go:172] (0xc000539180) (5) Data frame handling\nI0701 00:15:21.765829 1036 log.go:172] (0xc000539180) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.111.92.158:80/\nI0701 00:15:21.771309 1036 log.go:172] (0xc00093f1e0) Data frame received for 3\nI0701 00:15:21.771344 1036 log.go:172] (0xc000452f00) (3) Data frame handling\nI0701 00:15:21.771365 1036 log.go:172] (0xc000452f00) (3) Data frame sent\nI0701 00:15:21.771851 1036 log.go:172] (0xc00093f1e0) Data frame received for 3\nI0701 00:15:21.771873 1036 log.go:172] (0xc00093f1e0) Data frame received for 5\nI0701 00:15:21.771911 1036 log.go:172] (0xc000539180) (5) Data frame 
handling and sending repeat in this pattern for each request. [Elided for readability: the remainder of this kubectl exec stream-debug capture, consisting of further log.go:172 data-frame lines for streams 1, 3, and 5 interleaved with the exec'd shell's "+ echo" / "+ curl -q -s --connect-timeout 2 http://10.111.92.158:80/" commands; they record only the stream copying for each request. The capture closes with:]\nI0701 00:15:21.832773 1036 log.go:172] (0xc00093f1e0) Data frame received for 1\nI0701 00:15:21.832791 1036 log.go:172] (0xc0000dd0e0) (1) Data frame handling\nI0701 00:15:21.832801 1036 log.go:172] (0xc0000dd0e0) (1) Data frame sent\nI0701 00:15:21.832869 1036 log.go:172] (0xc00093f1e0) (0xc0000dd0e0) Stream removed, broadcasting: 1\nI0701 00:15:21.832915 1036 log.go:172] (0xc00093f1e0) Go away received\nI0701 00:15:21.833329 1036 log.go:172] (0xc00093f1e0) (0xc0000dd0e0) Stream removed, broadcasting: 1\nI0701 00:15:21.833341 1036 log.go:172] (0xc00093f1e0) (0xc000452f00) Stream removed, broadcasting: 3\nI0701 00:15:21.833346 1036 log.go:172] (0xc00093f1e0) (0xc000539180) Stream removed, broadcasting: 5\n" Jul 1 00:15:21.843: INFO: stdout: "\naffinity-clusterip-9lrsg" repeated sixteen times, once per request Jul 1 00:15:21.843: INFO: Received response from host: affinity-clusterip-9lrsg (logged sixteen times, after one initial empty response line from stdout's leading newline; every request reached the same backend pod, so session affinity held) Jul 1 00:15:21.843: INFO: Cleaning up the exec pod STEP: deleting ReplicationController affinity-clusterip in namespace services-1853, will wait for the garbage collector to delete the pods.
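The check above can be reproduced by hand, since session affinity for a ClusterIP service is a single field on the Service spec. A minimal shell sketch follows; every "affinity-demo" name is illustrative, while the agnhost image and the curl pattern follow the log.

# Back the service with three serve-hostname pods; each replies with its own pod name.
kubectl apply -f - <<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: affinity-demo
spec:
  replicas: 3
  selector:
    matchLabels:
      app: affinity-demo
  template:
    metadata:
      labels:
        app: affinity-demo
    spec:
      containers:
      - name: agnhost
        image: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13
        args: ["serve-hostname"]
        ports:
        - containerPort: 9376
---
apiVersion: v1
kind: Service
metadata:
  name: affinity-demo
spec:
  selector:
    app: affinity-demo
  sessionAffinity: ClientIP    # pin each client IP to one backend pod
  ports:
  - port: 80
    targetPort: 9376
EOF

# Curl the ClusterIP repeatedly from a single client pod. With ClientIP
# affinity every response should carry the same pod name, as in the log above.
CLUSTER_IP=$(kubectl get svc affinity-demo -o jsonpath='{.spec.clusterIP}')
kubectl run affinity-client --rm -i --image=busybox --restart=Never -- \
  sh -c "for i in 1 2 3 4 5; do wget -qO- http://${CLUSTER_IP}:80/; echo; done"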
Jul 1 00:15:21.958: INFO: Deleting ReplicationController affinity-clusterip took: 7.569422ms Jul 1 00:15:22.458: INFO: Terminating ReplicationController affinity-clusterip pods took: 500.250493ms [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 1 00:15:35.402: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-1853" for this suite. [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:813 • [SLOW TEST:25.646 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","total":294,"completed":112,"skipped":1893,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 1 00:15:35.413: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin Jul 1 00:15:35.535: INFO: Waiting up to 5m0s for pod "downwardapi-volume-cc495063-dd23-4c94-8455-a3a1342ab2f8" in namespace "projected-474" to be "Succeeded or Failed" Jul 1 00:15:35.539: INFO: Pod "downwardapi-volume-cc495063-dd23-4c94-8455-a3a1342ab2f8": Phase="Pending", Reason="", readiness=false. Elapsed: 3.687868ms Jul 1 00:15:37.543: INFO: Pod "downwardapi-volume-cc495063-dd23-4c94-8455-a3a1342ab2f8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007549371s Jul 1 00:15:39.548: INFO: Pod "downwardapi-volume-cc495063-dd23-4c94-8455-a3a1342ab2f8": Phase="Running", Reason="", readiness=true. Elapsed: 4.01251159s Jul 1 00:15:41.552: INFO: Pod "downwardapi-volume-cc495063-dd23-4c94-8455-a3a1342ab2f8": Phase="Succeeded", Reason="", readiness=false.
Elapsed: 6.017261555s STEP: Saw pod success Jul 1 00:15:41.552: INFO: Pod "downwardapi-volume-cc495063-dd23-4c94-8455-a3a1342ab2f8" satisfied condition "Succeeded or Failed" Jul 1 00:15:41.556: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-cc495063-dd23-4c94-8455-a3a1342ab2f8 container client-container: STEP: delete the pod Jul 1 00:15:41.587: INFO: Waiting for pod downwardapi-volume-cc495063-dd23-4c94-8455-a3a1342ab2f8 to disappear Jul 1 00:15:41.600: INFO: Pod downwardapi-volume-cc495063-dd23-4c94-8455-a3a1342ab2f8 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 1 00:15:41.600: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-474" for this suite. • [SLOW TEST:6.196 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:36 should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":294,"completed":113,"skipped":1911,"failed":0} SS ------------------------------ [sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 1 00:15:41.610: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-5457.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-1.dns-test-service.dns-5457.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/wheezy_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-5457.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-5457.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-1.dns-test-service.dns-5457.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/jessie_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-5457.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe /etc/hosts STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Jul 1 00:15:47.770: INFO: DNS probes using dns-5457/dns-test-dd7855e4-66a2-4b26-8459-0af720667451 succeeded STEP: deleting the pod [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 1 00:15:47.796: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-5457" for this suite. • [SLOW TEST:6.264 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]","total":294,"completed":114,"skipped":1913,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 1 00:15:47.874: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90 Jul 1 00:15:48.176: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Jul 1 00:15:48.241: INFO: Waiting for terminating namespaces to be deleted... 
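One note on the probe commands above: the doubled $$ is escaping from the test's command template, not shell syntax. Run directly in a shell inside a pod of the dns-5457 namespace, the same checks reduce to the sketch below.

# Names from the /etc/hosts entries the kubelet injects into the pod:
getent hosts dns-querier-1.dns-test-service.dns-5457.svc.cluster.local && echo hosts-OK
getent hosts dns-querier-1 && echo short-name-OK
# Pod A record derived from the pod IP (a.b.c.d becomes a-b-c-d.<namespace>.pod.cluster.local):
podARec=$(hostname -i | awk -F. '{print $1"-"$2"-"$3"-"$4".dns-5457.pod.cluster.local"}')
dig +notcp +noall +answer +search "${podARec}" A   # lookup over UDP
dig +tcp +noall +answer +search "${podARec}" A     # same lookup over TCP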
Jul 1 00:15:48.244: INFO: Logging pods the apiserver thinks are on node latest-worker before test Jul 1 00:15:48.260: INFO: rally-c184502e-30nwopzm from c-rally-c184502e-zuy338to started at 2020-05-11 08:48:25 +0000 UTC (1 container status recorded) Jul 1 00:15:48.260: INFO: Container rally-c184502e-30nwopzm ready: true, restart count 0 Jul 1 00:15:48.260: INFO: rally-c184502e-30nwopzm-7fmqm from c-rally-c184502e-zuy338to started at 2020-05-11 08:48:29 +0000 UTC (1 container status recorded) Jul 1 00:15:48.260: INFO: Container rally-c184502e-30nwopzm ready: false, restart count 0 Jul 1 00:15:48.260: INFO: kindnet-hg2tf from kube-system started at 2020-04-29 09:54:13 +0000 UTC (1 container status recorded) Jul 1 00:15:48.260: INFO: Container kindnet-cni ready: true, restart count 7 Jul 1 00:15:48.260: INFO: kube-proxy-c8n27 from kube-system started at 2020-04-29 09:54:13 +0000 UTC (1 container status recorded) Jul 1 00:15:48.260: INFO: Container kube-proxy ready: true, restart count 0 Jul 1 00:15:48.260: INFO: Logging pods the apiserver thinks are on node latest-worker2 before test Jul 1 00:15:48.266: INFO: rally-c184502e-ept97j69-6xvbj from c-rally-c184502e-2luhd3t4 started at 2020-05-11 08:48:03 +0000 UTC (1 container status recorded) Jul 1 00:15:48.266: INFO: Container rally-c184502e-ept97j69 ready: false, restart count 0 Jul 1 00:15:48.266: INFO: terminate-cmd-rpa297bb112-e54d-4fcd-9997-b59cbf421a58 from container-runtime-7090 started at 2020-05-12 09:11:35 +0000 UTC (1 container status recorded) Jul 1 00:15:48.266: INFO: Container terminate-cmd-rpa ready: true, restart count 2 Jul 1 00:15:48.266: INFO: kindnet-jl4dn from kube-system started at 2020-04-29 09:54:11 +0000 UTC (1 container status recorded) Jul 1 00:15:48.266: INFO: Container kindnet-cni ready: true, restart count 7 Jul 1 00:15:48.266: INFO: kube-proxy-pcmmp from kube-system started at 2020-04-29 09:54:11 +0000 UTC (1 container status recorded) Jul 1 00:15:48.266: INFO: Container kube-proxy ready: true, restart count 0 [It] validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: verifying the node has the label node latest-worker STEP: verifying the node has the label node latest-worker2 Jul 1 00:15:48.432: INFO: Pod rally-c184502e-30nwopzm requesting resource cpu=0m on Node latest-worker Jul 1 00:15:48.432: INFO: Pod terminate-cmd-rpa297bb112-e54d-4fcd-9997-b59cbf421a58 requesting resource cpu=0m on Node latest-worker2 Jul 1 00:15:48.432: INFO: Pod kindnet-hg2tf requesting resource cpu=100m on Node latest-worker Jul 1 00:15:48.432: INFO: Pod kindnet-jl4dn requesting resource cpu=100m on Node latest-worker2 Jul 1 00:15:48.432: INFO: Pod kube-proxy-c8n27 requesting resource cpu=0m on Node latest-worker Jul 1 00:15:48.432: INFO: Pod kube-proxy-pcmmp requesting resource cpu=0m on Node latest-worker2 STEP: Starting Pods to consume most of the cluster CPU. Jul 1 00:15:48.432: INFO: Creating a pod which consumes cpu=11130m on Node latest-worker Jul 1 00:15:48.438: INFO: Creating a pod which consumes cpu=11130m on Node latest-worker2 STEP: Creating another pod that requires an unavailable amount of CPU.
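The "pod that requires an unavailable amount of CPU" step can be approximated with a request that exceeds any node's remaining allocatable CPU. A minimal sketch; the pod name is illustrative, while the image and the 11130m figure follow the filler pods above.

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: cpu-over-request
spec:
  containers:
  - name: pause
    image: k8s.gcr.io/pause:3.2
    resources:
      requests:
        cpu: "11130m"    # more CPU than any node still has free
EOF
# The pod never schedules; it stays Pending and accumulates the
# FailedScheduling events quoted in the STEP lines that follow.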
STEP: Considering event: Type = [Normal], Name = [filler-pod-28a44300-5a97-4602-b001-bdf0370366d7.161d78b1dc0ef157], Reason = [Scheduled], Message = [Successfully assigned sched-pred-8803/filler-pod-28a44300-5a97-4602-b001-bdf0370366d7 to latest-worker] STEP: Considering event: Type = [Normal], Name = [filler-pod-28a44300-5a97-4602-b001-bdf0370366d7.161d78b231239c38], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.2" already present on machine] STEP: Considering event: Type = [Normal], Name = [filler-pod-28a44300-5a97-4602-b001-bdf0370366d7.161d78b2bc5821d8], Reason = [Created], Message = [Created container filler-pod-28a44300-5a97-4602-b001-bdf0370366d7] STEP: Considering event: Type = [Normal], Name = [filler-pod-28a44300-5a97-4602-b001-bdf0370366d7.161d78b2db38fcb2], Reason = [Started], Message = [Started container filler-pod-28a44300-5a97-4602-b001-bdf0370366d7] STEP: Considering event: Type = [Normal], Name = [filler-pod-7df30526-a358-4031-a95c-c365fdddfc27.161d78b1def65c82], Reason = [Scheduled], Message = [Successfully assigned sched-pred-8803/filler-pod-7df30526-a358-4031-a95c-c365fdddfc27 to latest-worker2] STEP: Considering event: Type = [Normal], Name = [filler-pod-7df30526-a358-4031-a95c-c365fdddfc27.161d78b23e95b0d5], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.2" already present on machine] STEP: Considering event: Type = [Normal], Name = [filler-pod-7df30526-a358-4031-a95c-c365fdddfc27.161d78b2c5ae712b], Reason = [Created], Message = [Created container filler-pod-7df30526-a358-4031-a95c-c365fdddfc27] STEP: Considering event: Type = [Normal], Name = [filler-pod-7df30526-a358-4031-a95c-c365fdddfc27.161d78b2e14cb05e], Reason = [Started], Message = [Started container filler-pod-7df30526-a358-4031-a95c-c365fdddfc27] STEP: Considering event: Type = [Warning], Name = [additional-pod.161d78b345b36c01], Reason = [FailedScheduling], Message = [0/3 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate, 2 Insufficient cpu.] STEP: Considering event: Type = [Warning], Name = [additional-pod.161d78b348eb0428], Reason = [FailedScheduling], Message = [0/3 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate, 2 Insufficient cpu.] STEP: removing the label node off the node latest-worker STEP: verifying the node doesn't have the label node STEP: removing the label node off the node latest-worker2 STEP: verifying the node doesn't have the label node [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 1 00:15:55.626: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-8803" for this suite. 
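Outside the framework, the events the test "considers" above can be pulled straight from the API; for example (the namespace follows the log):

# Show why scheduling failed, oldest events first.
kubectl get events -n sched-pred-8803 \
  --field-selector reason=FailedScheduling \
  --sort-by=.metadata.creationTimestamp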
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 • [SLOW TEST:7.762 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance]","total":294,"completed":115,"skipped":1946,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 1 00:15:55.637: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:809 [It] should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating service multi-endpoint-test in namespace services-9218 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-9218 to expose endpoints map[] Jul 1 00:15:55.786: INFO: Get endpoints failed (9.684899ms elapsed, ignoring for 5s): endpoints "multi-endpoint-test" not found Jul 1 00:15:56.791: INFO: successfully validated that service multi-endpoint-test in namespace services-9218 exposes endpoints map[] (1.014255757s elapsed) STEP: Creating pod pod1 in namespace services-9218 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-9218 to expose endpoints map[pod1:[100]] Jul 1 00:16:00.858: INFO: successfully validated that service multi-endpoint-test in namespace services-9218 exposes endpoints map[pod1:[100]] (4.057535159s elapsed) STEP: Creating pod pod2 in namespace services-9218 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-9218 to expose endpoints map[pod1:[100] pod2:[101]] Jul 1 00:16:04.338: INFO: successfully validated that service multi-endpoint-test in namespace services-9218 exposes endpoints map[pod1:[100] pod2:[101]] (3.474119567s elapsed) STEP: Deleting pod pod1 in namespace services-9218 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-9218 to expose endpoints map[pod2:[101]] Jul 1 00:16:05.420: INFO: successfully validated that service multi-endpoint-test in namespace services-9218 exposes endpoints map[pod2:[101]] (1.077335798s elapsed) STEP: Deleting pod pod2 in namespace services-9218 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-9218 to expose endpoints map[] Jul 1 00:16:06.531: INFO: successfully validated that service multi-endpoint-test in namespace services-9218 exposes endpoints map[] (1.106440885s elapsed) [AfterEach] [sig-network] Services 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 1 00:16:06.566: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-9218" for this suite. [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:813 • [SLOW TEST:10.946 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Services should serve multiport endpoints from pods [Conformance]","total":294,"completed":116,"skipped":1978,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should find a service from listing all namespaces [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 1 00:16:06.583: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:809 [It] should find a service from listing all namespaces [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: fetching services [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 1 00:16:06.703: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-758" for this suite. 
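The multi-endpoint-test service above publishes two ports backed by different pods. A service of the same shape looks roughly like the sketch below; the selector, port names, and service ports are illustrative, while targetPorts 100 and 101 follow the endpoint maps in the log.

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: multi-endpoint-test
spec:
  selector:
    app: multiport-demo
  ports:
  - name: portname1
    port: 80
    targetPort: 100   # served by pod1 in the log
  - name: portname2
    port: 81
    targetPort: 101   # served by pod2 in the log
EOF
# Each port only gains endpoints once a ready pod exposes the matching
# containerPort, which is what the waits above validate.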
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:813 •{"msg":"PASSED [sig-network] Services should find a service from listing all namespaces [Conformance]","total":294,"completed":117,"skipped":1996,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 1 00:16:06.709: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Jul 1 00:16:07.595: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Jul 1 00:16:09.835: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729159367, loc:(*time.Location)(0x80643c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729159367, loc:(*time.Location)(0x80643c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729159367, loc:(*time.Location)(0x80643c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729159367, loc:(*time.Location)(0x80643c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Jul 1 00:16:12.913: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should deny crd creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Registering the crd webhook via the AdmissionRegistration API STEP: Creating a custom resource definition that should be denied by the webhook Jul 1 00:16:12.959: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 1 00:16:13.004: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-2943" for this suite. STEP: Destroying namespace "webhook-2943-markers" for this suite. 
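The crd webhook registered above is, in essence, a ValidatingWebhookConfiguration scoped to CustomResourceDefinition creation. A sketch of such an object follows; the names, namespace, and path are illustrative, and clientConfig would normally also carry the webhook's serving CA in caBundle.

kubectl apply -f - <<'EOF'
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingWebhookConfiguration
metadata:
  name: deny-crd-creation
webhooks:
- name: deny-crd.example.com
  rules:
  - apiGroups: ["apiextensions.k8s.io"]
    apiVersions: ["*"]
    operations: ["CREATE"]
    resources: ["customresourcedefinitions"]
  clientConfig:
    service:
      namespace: default        # the e2e test points this at its own webhook namespace
      name: e2e-test-webhook
      path: /crd
  sideEffects: None
  admissionReviewVersions: ["v1", "v1beta1"]
EOF
# With failurePolicy defaulting to Fail, any CRD create the webhook rejects
# (or that cannot reach it) is denied, which is what the test asserts.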
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:6.366 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should deny crd creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","total":294,"completed":118,"skipped":2028,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 1 00:16:13.075: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should delete RS created by deployment when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the deployment STEP: Wait for the Deployment to create new ReplicaSet STEP: delete the deployment STEP: wait for all rs to be garbage collected STEP: expected 0 rs, got 1 rs STEP: expected 0 pods, got 2 pods STEP: Gathering metrics W0701 00:16:14.290161 8 metrics_grabber.go:94] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Jul 1 00:16:14.290: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 1 00:16:14.290: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-2736" for this suite. 
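The garbage-collector behavior above is driven by the delete request's propagationPolicy. Issued against the API directly it looks like this sketch; the deployment name and namespace are illustrative.

# Background propagation: the deployment object is deleted at once and the
# garbage collector then removes the dependent ReplicaSet and pods, which is
# the "wait for all rs to be garbage collected" phase above.
kubectl proxy --port=8001 &
curl -X DELETE \
  -H 'Content-Type: application/json' \
  -d '{"kind":"DeleteOptions","apiVersion":"v1","propagationPolicy":"Background"}' \
  http://127.0.0.1:8001/apis/apps/v1/namespaces/default/deployments/example-deployment
# Substituting "Orphan" leaves the ReplicaSet behind, the case covered by the
# orphaning test later in this run.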
•{"msg":"PASSED [sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance]","total":294,"completed":119,"skipped":2050,"failed":0} SS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 1 00:16:14.296: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Jul 1 00:16:14.810: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Jul 1 00:16:16.896: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729159374, loc:(*time.Location)(0x80643c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729159374, loc:(*time.Location)(0x80643c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729159374, loc:(*time.Location)(0x80643c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729159374, loc:(*time.Location)(0x80643c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)} Jul 1 00:16:18.901: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729159374, loc:(*time.Location)(0x80643c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729159374, loc:(*time.Location)(0x80643c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729159374, loc:(*time.Location)(0x80643c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729159374, loc:(*time.Location)(0x80643c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Jul 1 00:16:21.963: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should not be able to mutate or prevent deletion of webhook configuration objects 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Registering a validating webhook on ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects, via the AdmissionRegistration API STEP: Registering a mutating webhook on ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects, via the AdmissionRegistration API STEP: Creating a dummy validating-webhook-configuration object STEP: Deleting the validating-webhook-configuration, which should be possible to remove STEP: Creating a dummy mutating-webhook-configuration object STEP: Deleting the mutating-webhook-configuration, which should be possible to remove [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 1 00:16:22.138: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-2115" for this suite. STEP: Destroying namespace "webhook-2115-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:7.968 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should not be able to mutate or prevent deletion of webhook configuration objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","total":294,"completed":120,"skipped":2052,"failed":0} SSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 1 00:16:22.264: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:126 STEP: Setting up server cert STEP: Create role binding to let cr conversion webhook read extension-apiserver-authentication STEP: Deploying the custom resource conversion webhook pod STEP: Wait for the deployment to be ready Jul 1 00:16:23.629: INFO: deployment "sample-crd-conversion-webhook-deployment" doesn't have the required revision set Jul 1 00:16:25.638: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729159383, loc:(*time.Location)(0x80643c0)}}, 
LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729159383, loc:(*time.Location)(0x80643c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729159383, loc:(*time.Location)(0x80643c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729159383, loc:(*time.Location)(0x80643c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-69bd8c6bb8\" is progressing."}}, CollisionCount:(*int32)(nil)} Jul 1 00:16:27.657: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729159383, loc:(*time.Location)(0x80643c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729159383, loc:(*time.Location)(0x80643c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729159383, loc:(*time.Location)(0x80643c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729159383, loc:(*time.Location)(0x80643c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-69bd8c6bb8\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Jul 1 00:16:30.693: INFO: Waiting for amount of service:e2e-test-crd-conversion-webhook endpoints to be 1 [It] should be able to convert a non homogeneous list of CRs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Jul 1 00:16:30.698: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating a v1 custom resource STEP: Create a v2 custom resource STEP: List CRs in v1 STEP: List CRs in v2 [AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 1 00:16:31.935: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-webhook-500" for this suite. 
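Serving v1 and v2 of the same custom resource, as above, requires a conversion stanza on the CRD, roughly like this sketch; the group, kind, and webhook service are illustrative, and the e2e test installs its own conversion webhook in crd-webhook-500.

kubectl apply -f - <<'EOF'
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: demos.example.com
spec:
  group: example.com
  scope: Namespaced
  names:
    plural: demos
    singular: demo
    kind: Demo
  versions:
  - name: v1
    served: true
    storage: true     # objects are persisted in v1
    schema:
      openAPIV3Schema:
        type: object
        x-kubernetes-preserve-unknown-fields: true
  - name: v2
    served: true
    storage: false    # reads and writes in v2 go through conversion
    schema:
      openAPIV3Schema:
        type: object
        x-kubernetes-preserve-unknown-fields: true
  conversion:
    strategy: Webhook
    webhook:
      conversionReviewVersions: ["v1", "v1beta1"]
      clientConfig:
        service:
          namespace: default
          name: crd-conversion-webhook
          path: /crdconvert
EOF
# Listing the resource at each version ("List CRs in v1", "List CRs in v2"
# above) then exercises the webhook for every object stored in the other version.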
[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:137 • [SLOW TEST:9.767 seconds] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to convert a non homogeneous list of CRs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","total":294,"completed":121,"skipped":2060,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 1 00:16:32.031: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:179 [It] should be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: updating the pod Jul 1 00:16:36.654: INFO: Successfully updated pod "pod-update-f10d64ef-2ee8-40f1-840b-d5e5a6d3d0bc" STEP: verifying the updated pod is in kubernetes Jul 1 00:16:36.701: INFO: Pod update OK [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 1 00:16:36.701: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-8185" for this suite. 
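The "updating the pod" step above changes one of a pod's few mutable fields; done by hand it is just a patch, e.g. (pod name and label are illustrative):

# Only certain pod fields (labels, annotations, container images, ...) may
# change after creation; patching a label is enough for the update check.
kubectl patch pod pod-update-demo --type=merge \
  -p '{"metadata":{"labels":{"time":"updated"}}}'
kubectl get pod pod-update-demo --show-labels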
•{"msg":"PASSED [k8s.io] Pods should be updated [NodeConformance] [Conformance]","total":294,"completed":122,"skipped":2103,"failed":0} SSSSSSSSSS ------------------------------ [sig-api-machinery] Secrets should patch a secret [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 1 00:16:36.731: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should patch a secret [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a secret STEP: listing secrets in all namespaces to ensure that there are more than zero STEP: patching the secret STEP: deleting the secret using a LabelSelector STEP: listing secrets in all namespaces, searching for label name and value in patch [AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 1 00:16:36.876: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-5632" for this suite. •{"msg":"PASSED [sig-api-machinery] Secrets should patch a secret [Conformance]","total":294,"completed":123,"skipped":2113,"failed":0} ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 1 00:16:36.884: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64 STEP: create the container to handle the HTTPGet hook request. 
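The lifecycle-hook pod created next attaches a postStart exec hook. A minimal standalone version looks like the sketch below; the image and hook command are illustrative, and the e2e variant instead calls back to the HTTPGet-handler pod created above to record the hook firing.

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: pod-with-poststart-exec-hook
spec:
  containers:
  - name: main
    image: busybox
    command: ["sh", "-c", "sleep 3600"]
    lifecycle:
      postStart:
        exec:
          # Runs inside the container right after it starts; the container
          # does not reach Running until the hook completes.
          command: ["sh", "-c", "echo poststart > /tmp/hook-ran"]
EOF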
[It] should execute poststart exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the pod with lifecycle hook STEP: check poststart hook STEP: delete the pod with lifecycle hook Jul 1 00:16:45.084: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jul 1 00:16:45.088: INFO: Pod pod-with-poststart-exec-hook still exists Jul 1 00:16:47.088: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jul 1 00:16:47.092: INFO: Pod pod-with-poststart-exec-hook still exists Jul 1 00:16:49.088: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jul 1 00:16:49.093: INFO: Pod pod-with-poststart-exec-hook still exists Jul 1 00:16:51.088: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jul 1 00:16:51.093: INFO: Pod pod-with-poststart-exec-hook still exists Jul 1 00:16:53.088: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jul 1 00:16:53.106: INFO: Pod pod-with-poststart-exec-hook still exists Jul 1 00:16:55.088: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jul 1 00:16:55.094: INFO: Pod pod-with-poststart-exec-hook still exists Jul 1 00:16:57.088: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jul 1 00:16:57.093: INFO: Pod pod-with-poststart-exec-hook no longer exists [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 1 00:16:57.093: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-1299" for this suite. • [SLOW TEST:20.219 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute poststart exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance]","total":294,"completed":124,"skipped":2113,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 1 00:16:57.103: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:179 [It] should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating the pod STEP: setting up watch STEP: submitting the pod to kubernetes Jul 1 00:16:57.176: INFO: observed the pod list STEP: verifying the pod is in kubernetes STEP: verifying pod creation was observed STEP: deleting the pod gracefully STEP: verifying pod deletion was 
observed [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 1 00:17:05.124: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-1420" for this suite. • [SLOW TEST:8.028 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Pods should be submitted and removed [NodeConformance] [Conformance]","total":294,"completed":125,"skipped":2132,"failed":0} SSSSSSSS ------------------------------ [k8s.io] [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 1 00:17:05.132: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename events STEP: Waiting for a default service account to be provisioned in namespace [It] should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: retrieving the pod Jul 1 00:17:09.204: INFO: &Pod{ObjectMeta:{send-events-ab3ee632-3ece-442c-b51e-e48cc08bb4e6 events-9609 /api/v1/namespaces/events-9609/pods/send-events-ab3ee632-3ece-442c-b51e-e48cc08bb4e6 5cadde9c-bc00-428a-b7ae-1ff165fc938c 17244219 0 2020-07-01 00:17:05 +0000 UTC map[name:foo time:171470285] map[] [] [] [{e2e.test Update v1 2020-07-01 00:17:05 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{},"f:time":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"p\"}":{".":{},"f:args":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:ports":{".":{},"k:{\"containerPort\":80,\"protocol\":\"TCP\"}":{".":{},"f:containerPort":{},"f:protocol":{}}},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-07-01 00:17:08 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.1.130\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-bjcv6,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-bjcv6,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:p,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13,Command:[],Args:[serve-hostname],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:,HostPort:0,ContainerPort:80,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-bjcv6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-01 00:17:05 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-01 00:17:08 +0000 
UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-01 00:17:08 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-01 00:17:05 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.13,PodIP:10.244.1.130,StartTime:2020-07-01 00:17:05 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:p,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-07-01 00:17:07 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13,ImageID:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost@sha256:6d5c9e684dd8f91cc36601933d51b91768d0606593de6820e19e5f194b0df1b9,ContainerID:containerd://c7a49e0a1e40a082eaade955225ec7bebc53ec74e7a4beab3fe72d3e60cf2185,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.130,},},EphemeralContainerStatuses:[]ContainerStatus{},},} STEP: checking for scheduler event about the pod Jul 1 00:17:11.209: INFO: Saw scheduler event for our pod. STEP: checking for kubelet event about the pod Jul 1 00:17:13.214: INFO: Saw kubelet event for our pod. STEP: deleting the pod [AfterEach] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 1 00:17:13.218: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "events-9609" for this suite. • [SLOW TEST:8.127 seconds] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running [Conformance]","total":294,"completed":126,"skipped":2140,"failed":0} SSSSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 1 00:17:13.259: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should run with the expected status [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Container 'terminate-cmd-rpa': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpa': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpa': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpa': should get the expected 'State' STEP: Container 'terminate-cmd-rpa': should be possible to delete [NodeConformance] STEP: Container 'terminate-cmd-rpof': should get the expected 
'RestartCount' STEP: Container 'terminate-cmd-rpof': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpof': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpof': should get the expected 'State' STEP: Container 'terminate-cmd-rpof': should be possible to delete [NodeConformance] STEP: Container 'terminate-cmd-rpn': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpn': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpn': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpn': should get the expected 'State' STEP: Container 'terminate-cmd-rpn': should be possible to delete [NodeConformance] [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 1 00:17:47.130: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-519" for this suite. • [SLOW TEST:33.879 seconds] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 blackbox test /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:41 when starting a container that exits /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:42 should run with the expected status [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance]","total":294,"completed":127,"skipped":2150,"failed":0} SSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 1 00:17:47.139: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the deployment STEP: Wait for the Deployment to create new ReplicaSet STEP: delete the deployment STEP: wait for deployment deletion to see if the garbage collector mistakenly deletes the rs STEP: Gathering metrics W0701 00:17:48.293927 8 metrics_grabber.go:94] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. 
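For readers reconstructing the garbage collector scenario above: the orphan behavior under test comes down to a single delete call whose PropagationPolicy is Orphan, which removes the Deployment but deliberately leaves its ReplicaSet behind. A minimal client-go sketch follows; the kubeconfig path matches this run, while the namespace and deployment name are hypothetical stand-ins.

package main

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Same kubeconfig this e2e run uses.
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	// Orphan propagation: delete the Deployment itself but strip the
	// ownerReference from its ReplicaSet instead of cascading the delete.
	policy := metav1.DeletePropagationOrphan
	// "gc-demo" and "example-deployment" are hypothetical stand-ins.
	err = clientset.AppsV1().Deployments("gc-demo").Delete(context.TODO(),
		"example-deployment",
		metav1.DeleteOptions{PropagationPolicy: &policy})
	if err != nil {
		panic(err)
	}
}

With Foreground or Background propagation the ReplicaSet would be collected as a dependent; Orphan severs the ownerReference instead, which is exactly what the test then verifies before gathering metrics below.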
Jul 1 00:17:48.293: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 1 00:17:48.294: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-8133" for this suite. •{"msg":"PASSED [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]","total":294,"completed":128,"skipped":2161,"failed":0} SSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 1 00:17:48.302: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin Jul 1 00:17:48.549: INFO: Waiting up to 5m0s for pod "downwardapi-volume-5710766c-1e9a-4af7-8ce0-9a12a0864f2f" in namespace "projected-465" to be "Succeeded or Failed" Jul 1 00:17:48.665: INFO: Pod "downwardapi-volume-5710766c-1e9a-4af7-8ce0-9a12a0864f2f": Phase="Pending", Reason="", readiness=false. Elapsed: 116.755127ms Jul 1 00:17:50.746: INFO: Pod "downwardapi-volume-5710766c-1e9a-4af7-8ce0-9a12a0864f2f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.197023836s Jul 1 00:17:52.838: INFO: Pod "downwardapi-volume-5710766c-1e9a-4af7-8ce0-9a12a0864f2f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.289125841s Jul 1 00:17:54.903: INFO: Pod "downwardapi-volume-5710766c-1e9a-4af7-8ce0-9a12a0864f2f": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.354562958s STEP: Saw pod success Jul 1 00:17:54.903: INFO: Pod "downwardapi-volume-5710766c-1e9a-4af7-8ce0-9a12a0864f2f" satisfied condition "Succeeded or Failed" Jul 1 00:17:54.912: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-5710766c-1e9a-4af7-8ce0-9a12a0864f2f container client-container: STEP: delete the pod Jul 1 00:17:54.970: INFO: Waiting for pod downwardapi-volume-5710766c-1e9a-4af7-8ce0-9a12a0864f2f to disappear Jul 1 00:17:54.990: INFO: Pod downwardapi-volume-5710766c-1e9a-4af7-8ce0-9a12a0864f2f no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 1 00:17:54.990: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-465" for this suite. • [SLOW TEST:6.697 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:36 should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance]","total":294,"completed":129,"skipped":2166,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 1 00:17:54.999: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir 0666 on node default medium Jul 1 00:17:55.069: INFO: Waiting up to 5m0s for pod "pod-8c998472-b9ef-48f6-8bb4-b0d6399d1f29" in namespace "emptydir-7965" to be "Succeeded or Failed" Jul 1 00:17:55.096: INFO: Pod "pod-8c998472-b9ef-48f6-8bb4-b0d6399d1f29": Phase="Pending", Reason="", readiness=false. Elapsed: 26.476785ms Jul 1 00:17:57.131: INFO: Pod "pod-8c998472-b9ef-48f6-8bb4-b0d6399d1f29": Phase="Pending", Reason="", readiness=false. Elapsed: 2.061451245s Jul 1 00:17:59.135: INFO: Pod "pod-8c998472-b9ef-48f6-8bb4-b0d6399d1f29": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.066012524s STEP: Saw pod success Jul 1 00:17:59.135: INFO: Pod "pod-8c998472-b9ef-48f6-8bb4-b0d6399d1f29" satisfied condition "Succeeded or Failed" Jul 1 00:17:59.138: INFO: Trying to get logs from node latest-worker pod pod-8c998472-b9ef-48f6-8bb4-b0d6399d1f29 container test-container: STEP: delete the pod Jul 1 00:17:59.216: INFO: Waiting for pod pod-8c998472-b9ef-48f6-8bb4-b0d6399d1f29 to disappear Jul 1 00:17:59.227: INFO: Pod pod-8c998472-b9ef-48f6-8bb4-b0d6399d1f29 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 1 00:17:59.228: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-7965" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":294,"completed":130,"skipped":2184,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 1 00:17:59.236: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin Jul 1 00:17:59.296: INFO: Waiting up to 5m0s for pod "downwardapi-volume-772ac60b-98ec-4274-8ee3-fc239aed5bfc" in namespace "projected-7924" to be "Succeeded or Failed" Jul 1 00:17:59.342: INFO: Pod "downwardapi-volume-772ac60b-98ec-4274-8ee3-fc239aed5bfc": Phase="Pending", Reason="", readiness=false. Elapsed: 46.061751ms Jul 1 00:18:01.377: INFO: Pod "downwardapi-volume-772ac60b-98ec-4274-8ee3-fc239aed5bfc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.080940412s Jul 1 00:18:03.381: INFO: Pod "downwardapi-volume-772ac60b-98ec-4274-8ee3-fc239aed5bfc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.08542852s STEP: Saw pod success Jul 1 00:18:03.381: INFO: Pod "downwardapi-volume-772ac60b-98ec-4274-8ee3-fc239aed5bfc" satisfied condition "Succeeded or Failed" Jul 1 00:18:03.384: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-772ac60b-98ec-4274-8ee3-fc239aed5bfc container client-container: STEP: delete the pod Jul 1 00:18:03.438: INFO: Waiting for pod downwardapi-volume-772ac60b-98ec-4274-8ee3-fc239aed5bfc to disappear Jul 1 00:18:03.445: INFO: Pod downwardapi-volume-772ac60b-98ec-4274-8ee3-fc239aed5bfc no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 1 00:18:03.445: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-7924" for this suite. 
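The "downward API volume plugin" exercised by this case surfaces the container's CPU limit as a file in a projected volume via a resourceFieldRef. A sketch of that volume shape, assuming the k8s.io/api/core/v1 types: the container name "client-container" matches the log, while the volume name and file path are illustrative.

package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
)

func main() {
	// Volume name and file path are illustrative; "client-container" and
	// "limits.cpu" are what the test above consumes.
	vol := v1.Volume{
		Name: "podinfo",
		VolumeSource: v1.VolumeSource{
			Projected: &v1.ProjectedVolumeSource{
				Sources: []v1.VolumeProjection{{
					DownwardAPI: &v1.DownwardAPIProjection{
						Items: []v1.DownwardAPIVolumeFile{{
							// The container's CPU limit lands in this file.
							Path: "cpu_limit",
							ResourceFieldRef: &v1.ResourceFieldSelector{
								ContainerName: "client-container",
								Resource:      "limits.cpu",
							},
						}},
					},
				}},
			},
		},
	}
	fmt.Println("volume:", vol.Name)
}

Mounted into the pod, the file at cpu_limit carries the limit (scaled by an optional Divisor, which defaults to 1), and the test simply reads it back from the container's logs.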
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance]","total":294,"completed":131,"skipped":2210,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 1 00:18:03.453: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to restart watching from the last resource version observed by the previous watch [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a watch on configmaps STEP: creating a new configmap STEP: modifying the configmap once STEP: closing the watch once it receives two notifications Jul 1 00:18:03.747: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-1875 /api/v1/namespaces/watch-1875/configmaps/e2e-watch-test-watch-closed 02e09a4c-8de7-4c43-b7b8-4914079582e1 17244584 0 2020-07-01 00:18:03 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] [{e2e.test Update v1 2020-07-01 00:18:03 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} Jul 1 00:18:03.747: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-1875 /api/v1/namespaces/watch-1875/configmaps/e2e-watch-test-watch-closed 02e09a4c-8de7-4c43-b7b8-4914079582e1 17244585 0 2020-07-01 00:18:03 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] [{e2e.test Update v1 2020-07-01 00:18:03 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} STEP: modifying the configmap a second time, while the watch is closed STEP: creating a new watch on configmaps from the last resource version observed by the first watch STEP: deleting the configmap STEP: Expecting to observe notifications for all changes to the configmap since the first watch closed Jul 1 00:18:03.816: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-1875 /api/v1/namespaces/watch-1875/configmaps/e2e-watch-test-watch-closed 02e09a4c-8de7-4c43-b7b8-4914079582e1 17244586 0 2020-07-01 00:18:03 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] [{e2e.test Update v1 2020-07-01 00:18:03 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} Jul 1 00:18:03.816: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-1875 /api/v1/namespaces/watch-1875/configmaps/e2e-watch-test-watch-closed 02e09a4c-8de7-4c43-b7b8-4914079582e1 17244587 0 2020-07-01 00:18:03 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] [{e2e.test Update v1 2020-07-01 00:18:03 +0000 UTC FieldsV1 
{"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 1 00:18:03.816: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-1875" for this suite. •{"msg":"PASSED [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance]","total":294,"completed":132,"skipped":2232,"failed":0} SSS ------------------------------ [k8s.io] Variable Expansion should fail substituting values in a volume subpath with backticks [sig-storage][Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 1 00:18:03.882: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should fail substituting values in a volume subpath with backticks [sig-storage][Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Jul 1 00:20:04.161: INFO: Deleting pod "var-expansion-732a22ca-1464-435a-8fb6-7494f1d49b59" in namespace "var-expansion-9882" Jul 1 00:20:04.166: INFO: Wait up to 5m0s for pod "var-expansion-732a22ca-1464-435a-8fb6-7494f1d49b59" to be fully deleted [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 1 00:20:08.199: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-9882" for this suite. 
• [SLOW TEST:124.327 seconds] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should fail substituting values in a volume subpath with backticks [sig-storage][Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Variable Expansion should fail substituting values in a volume subpath with backticks [sig-storage][Slow] [Conformance]","total":294,"completed":133,"skipped":2235,"failed":0} SSS ------------------------------ [sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 1 00:20:08.210: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin Jul 1 00:20:08.291: INFO: Waiting up to 5m0s for pod "downwardapi-volume-6c34f875-cad5-4d86-9586-3cb98dcb487d" in namespace "downward-api-8507" to be "Succeeded or Failed" Jul 1 00:20:08.359: INFO: Pod "downwardapi-volume-6c34f875-cad5-4d86-9586-3cb98dcb487d": Phase="Pending", Reason="", readiness=false. Elapsed: 68.067536ms Jul 1 00:20:10.365: INFO: Pod "downwardapi-volume-6c34f875-cad5-4d86-9586-3cb98dcb487d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.073848732s Jul 1 00:20:12.369: INFO: Pod "downwardapi-volume-6c34f875-cad5-4d86-9586-3cb98dcb487d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.077923616s STEP: Saw pod success Jul 1 00:20:12.369: INFO: Pod "downwardapi-volume-6c34f875-cad5-4d86-9586-3cb98dcb487d" satisfied condition "Succeeded or Failed" Jul 1 00:20:12.372: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-6c34f875-cad5-4d86-9586-3cb98dcb487d container client-container: STEP: delete the pod Jul 1 00:20:12.436: INFO: Waiting for pod downwardapi-volume-6c34f875-cad5-4d86-9586-3cb98dcb487d to disappear Jul 1 00:20:12.451: INFO: Pod downwardapi-volume-6c34f875-cad5-4d86-9586-3cb98dcb487d no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 1 00:20:12.451: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-8507" for this suite. 
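The 'Waiting up to 5m0s for pod ... to be "Succeeded or Failed"' lines that recur throughout this run are a plain phase poll against the API server. A sketch of the pattern, assuming apimachinery's wait helpers; the namespace is the one from the log above, the pod name is a hypothetical stand-in for the generated name.

package main

import (
	"context"
	"fmt"
	"time"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	// Poll every 2s for up to 5m, roughly the cadence seen in the log.
	// "example-pod" is a hypothetical stand-in for the generated pod name.
	err = wait.PollImmediate(2*time.Second, 5*time.Minute, func() (bool, error) {
		pod, err := clientset.CoreV1().Pods("downward-api-8507").Get(context.TODO(),
			"example-pod", metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		fmt.Printf("Phase=%q\n", pod.Status.Phase)
		return pod.Status.Phase == v1.PodSucceeded || pod.Status.Phase == v1.PodFailed, nil
	})
	if err != nil {
		panic(err)
	}
}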
•{"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance]","total":294,"completed":134,"skipped":2238,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 1 00:20:12.505: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for pods for Hostname [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-2.dns-test-service-2.dns-880.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-2.dns-test-service-2.dns-880.svc.cluster.local;test -n "$$(getent hosts dns-querier-2)" && echo OK > /results/wheezy_hosts@dns-querier-2;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-880.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-2.dns-test-service-2.dns-880.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-2.dns-test-service-2.dns-880.svc.cluster.local;test -n "$$(getent hosts dns-querier-2)" && echo OK > /results/jessie_hosts@dns-querier-2;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-880.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Jul 1 00:20:18.798: INFO: DNS probes using dns-880/dns-test-489ab1cb-2b73-4871-a051-50ffe72617c9 succeeded STEP: deleting the pod STEP: deleting the test headless service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 1 00:20:18.969: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-880" for this suite. 
• [SLOW TEST:6.474 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for pods for Hostname [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance]","total":294,"completed":135,"skipped":2254,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir wrapper volumes should not conflict [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 1 00:20:18.980: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir-wrapper STEP: Waiting for a default service account to be provisioned in namespace [It] should not conflict [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Cleaning up the secret STEP: Cleaning up the configmap STEP: Cleaning up the pod [AfterEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 1 00:20:25.532: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-wrapper-9196" for this suite. • [SLOW TEST:6.620 seconds] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 should not conflict [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not conflict [Conformance]","total":294,"completed":136,"skipped":2296,"failed":0} SSS ------------------------------ [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 1 00:20:25.600: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward api env vars Jul 1 00:20:26.016: INFO: Waiting up to 5m0s for pod "downward-api-ae6bbd7b-3d70-4ea8-ad8b-3fb978dcb5f7" in namespace "downward-api-2772" to be "Succeeded or Failed" Jul 1 00:20:26.066: INFO: Pod "downward-api-ae6bbd7b-3d70-4ea8-ad8b-3fb978dcb5f7": Phase="Pending", Reason="", readiness=false. Elapsed: 50.029659ms Jul 1 00:20:28.070: INFO: Pod "downward-api-ae6bbd7b-3d70-4ea8-ad8b-3fb978dcb5f7": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.053951004s Jul 1 00:20:30.074: INFO: Pod "downward-api-ae6bbd7b-3d70-4ea8-ad8b-3fb978dcb5f7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.057865341s STEP: Saw pod success Jul 1 00:20:30.074: INFO: Pod "downward-api-ae6bbd7b-3d70-4ea8-ad8b-3fb978dcb5f7" satisfied condition "Succeeded or Failed" Jul 1 00:20:30.076: INFO: Trying to get logs from node latest-worker2 pod downward-api-ae6bbd7b-3d70-4ea8-ad8b-3fb978dcb5f7 container dapi-container: STEP: delete the pod Jul 1 00:20:30.226: INFO: Waiting for pod downward-api-ae6bbd7b-3d70-4ea8-ad8b-3fb978dcb5f7 to disappear Jul 1 00:20:30.239: INFO: Pod downward-api-ae6bbd7b-3d70-4ea8-ad8b-3fb978dcb5f7 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 1 00:20:30.239: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-2772" for this suite. •{"msg":"PASSED [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]","total":294,"completed":137,"skipped":2299,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 1 00:20:30.247: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-9587 A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-9587;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-9587 A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-9587;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-9587.svc A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-9587.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-9587.svc A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-9587.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-9587.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-9587.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-9587.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-9587.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-9587.svc SRV)" && test -n "$$check" && echo OK > 
/results/wheezy_udp@_http._tcp.test-service-2.dns-9587.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-9587.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-9587.svc;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-9587.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 8.195.102.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.102.195.8_udp@PTR;check="$$(dig +tcp +noall +answer +search 8.195.102.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.102.195.8_tcp@PTR;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-9587 A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-9587;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-9587 A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-9587;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-9587.svc A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-9587.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-9587.svc A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-9587.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-9587.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-9587.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-9587.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-9587.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-9587.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-9587.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-9587.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-9587.svc;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-9587.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 8.195.102.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.102.195.8_udp@PTR;check="$$(dig +tcp +noall +answer +search 8.195.102.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.102.195.8_tcp@PTR;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Jul 1 00:20:36.490: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-9587/dns-test-9dd079d3-f931-497b-8852-b1f5176cada4: the server could not find the requested resource (get pods dns-test-9dd079d3-f931-497b-8852-b1f5176cada4) Jul 1 00:20:36.493: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-9587/dns-test-9dd079d3-f931-497b-8852-b1f5176cada4: the server could not find the requested resource (get pods dns-test-9dd079d3-f931-497b-8852-b1f5176cada4) Jul 1 00:20:36.496: INFO: Unable to read wheezy_udp@dns-test-service.dns-9587 from pod dns-9587/dns-test-9dd079d3-f931-497b-8852-b1f5176cada4: the server could not find the requested resource (get pods dns-test-9dd079d3-f931-497b-8852-b1f5176cada4) Jul 1 00:20:36.500: INFO: Unable to read wheezy_tcp@dns-test-service.dns-9587 from pod dns-9587/dns-test-9dd079d3-f931-497b-8852-b1f5176cada4: the server could not find the requested resource (get pods dns-test-9dd079d3-f931-497b-8852-b1f5176cada4) Jul 1 00:20:36.502: INFO: Unable to read wheezy_udp@dns-test-service.dns-9587.svc from pod dns-9587/dns-test-9dd079d3-f931-497b-8852-b1f5176cada4: the server could not find the requested resource (get pods dns-test-9dd079d3-f931-497b-8852-b1f5176cada4) Jul 1 00:20:36.506: INFO: Unable to read wheezy_tcp@dns-test-service.dns-9587.svc from pod dns-9587/dns-test-9dd079d3-f931-497b-8852-b1f5176cada4: the server could not find the requested resource (get pods dns-test-9dd079d3-f931-497b-8852-b1f5176cada4) Jul 1 00:20:36.509: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-9587.svc from pod dns-9587/dns-test-9dd079d3-f931-497b-8852-b1f5176cada4: the server could not find the requested resource (get pods dns-test-9dd079d3-f931-497b-8852-b1f5176cada4) Jul 1 00:20:36.512: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-9587.svc from pod dns-9587/dns-test-9dd079d3-f931-497b-8852-b1f5176cada4: the server could not find the requested resource (get pods dns-test-9dd079d3-f931-497b-8852-b1f5176cada4) Jul 1 00:20:36.537: INFO: Unable to read jessie_udp@dns-test-service from pod dns-9587/dns-test-9dd079d3-f931-497b-8852-b1f5176cada4: the server could not find the requested resource (get pods dns-test-9dd079d3-f931-497b-8852-b1f5176cada4) Jul 1 00:20:36.539: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-9587/dns-test-9dd079d3-f931-497b-8852-b1f5176cada4: the server could not find the requested resource (get pods dns-test-9dd079d3-f931-497b-8852-b1f5176cada4) Jul 1 00:20:36.542: INFO: Unable to read jessie_udp@dns-test-service.dns-9587 from pod dns-9587/dns-test-9dd079d3-f931-497b-8852-b1f5176cada4: the server could not find the requested resource (get pods dns-test-9dd079d3-f931-497b-8852-b1f5176cada4) Jul 1 00:20:36.545: INFO: Unable to read jessie_tcp@dns-test-service.dns-9587 from pod dns-9587/dns-test-9dd079d3-f931-497b-8852-b1f5176cada4: the server could not find the requested resource (get pods dns-test-9dd079d3-f931-497b-8852-b1f5176cada4) Jul 1 00:20:36.548: INFO: Unable to read jessie_udp@dns-test-service.dns-9587.svc from pod dns-9587/dns-test-9dd079d3-f931-497b-8852-b1f5176cada4: the server could not find the requested resource (get pods dns-test-9dd079d3-f931-497b-8852-b1f5176cada4) Jul 1 00:20:36.550: INFO: Unable to read 
jessie_tcp@dns-test-service.dns-9587.svc from pod dns-9587/dns-test-9dd079d3-f931-497b-8852-b1f5176cada4: the server could not find the requested resource (get pods dns-test-9dd079d3-f931-497b-8852-b1f5176cada4) Jul 1 00:20:36.553: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-9587.svc from pod dns-9587/dns-test-9dd079d3-f931-497b-8852-b1f5176cada4: the server could not find the requested resource (get pods dns-test-9dd079d3-f931-497b-8852-b1f5176cada4) Jul 1 00:20:36.556: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-9587.svc from pod dns-9587/dns-test-9dd079d3-f931-497b-8852-b1f5176cada4: the server could not find the requested resource (get pods dns-test-9dd079d3-f931-497b-8852-b1f5176cada4) Jul 1 00:20:36.574: INFO: Lookups using dns-9587/dns-test-9dd079d3-f931-497b-8852-b1f5176cada4 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-9587 wheezy_tcp@dns-test-service.dns-9587 wheezy_udp@dns-test-service.dns-9587.svc wheezy_tcp@dns-test-service.dns-9587.svc wheezy_udp@_http._tcp.dns-test-service.dns-9587.svc wheezy_tcp@_http._tcp.dns-test-service.dns-9587.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-9587 jessie_tcp@dns-test-service.dns-9587 jessie_udp@dns-test-service.dns-9587.svc jessie_tcp@dns-test-service.dns-9587.svc jessie_udp@_http._tcp.dns-test-service.dns-9587.svc jessie_tcp@_http._tcp.dns-test-service.dns-9587.svc] Jul 1 00:20:41.578: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-9587/dns-test-9dd079d3-f931-497b-8852-b1f5176cada4: the server could not find the requested resource (get pods dns-test-9dd079d3-f931-497b-8852-b1f5176cada4) Jul 1 00:20:41.583: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-9587/dns-test-9dd079d3-f931-497b-8852-b1f5176cada4: the server could not find the requested resource (get pods dns-test-9dd079d3-f931-497b-8852-b1f5176cada4) Jul 1 00:20:41.586: INFO: Unable to read wheezy_udp@dns-test-service.dns-9587 from pod dns-9587/dns-test-9dd079d3-f931-497b-8852-b1f5176cada4: the server could not find the requested resource (get pods dns-test-9dd079d3-f931-497b-8852-b1f5176cada4) Jul 1 00:20:41.590: INFO: Unable to read wheezy_tcp@dns-test-service.dns-9587 from pod dns-9587/dns-test-9dd079d3-f931-497b-8852-b1f5176cada4: the server could not find the requested resource (get pods dns-test-9dd079d3-f931-497b-8852-b1f5176cada4) Jul 1 00:20:41.593: INFO: Unable to read wheezy_udp@dns-test-service.dns-9587.svc from pod dns-9587/dns-test-9dd079d3-f931-497b-8852-b1f5176cada4: the server could not find the requested resource (get pods dns-test-9dd079d3-f931-497b-8852-b1f5176cada4) Jul 1 00:20:41.596: INFO: Unable to read wheezy_tcp@dns-test-service.dns-9587.svc from pod dns-9587/dns-test-9dd079d3-f931-497b-8852-b1f5176cada4: the server could not find the requested resource (get pods dns-test-9dd079d3-f931-497b-8852-b1f5176cada4) Jul 1 00:20:41.600: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-9587.svc from pod dns-9587/dns-test-9dd079d3-f931-497b-8852-b1f5176cada4: the server could not find the requested resource (get pods dns-test-9dd079d3-f931-497b-8852-b1f5176cada4) Jul 1 00:20:41.603: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-9587.svc from pod dns-9587/dns-test-9dd079d3-f931-497b-8852-b1f5176cada4: the server could not find the requested resource (get pods dns-test-9dd079d3-f931-497b-8852-b1f5176cada4) Jul 1 00:20:41.624: INFO: Unable to read 
jessie_udp@dns-test-service from pod dns-9587/dns-test-9dd079d3-f931-497b-8852-b1f5176cada4: the server could not find the requested resource (get pods dns-test-9dd079d3-f931-497b-8852-b1f5176cada4) Jul 1 00:20:41.627: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-9587/dns-test-9dd079d3-f931-497b-8852-b1f5176cada4: the server could not find the requested resource (get pods dns-test-9dd079d3-f931-497b-8852-b1f5176cada4) Jul 1 00:20:41.630: INFO: Unable to read jessie_udp@dns-test-service.dns-9587 from pod dns-9587/dns-test-9dd079d3-f931-497b-8852-b1f5176cada4: the server could not find the requested resource (get pods dns-test-9dd079d3-f931-497b-8852-b1f5176cada4) Jul 1 00:20:41.633: INFO: Unable to read jessie_tcp@dns-test-service.dns-9587 from pod dns-9587/dns-test-9dd079d3-f931-497b-8852-b1f5176cada4: the server could not find the requested resource (get pods dns-test-9dd079d3-f931-497b-8852-b1f5176cada4) Jul 1 00:20:41.636: INFO: Unable to read jessie_udp@dns-test-service.dns-9587.svc from pod dns-9587/dns-test-9dd079d3-f931-497b-8852-b1f5176cada4: the server could not find the requested resource (get pods dns-test-9dd079d3-f931-497b-8852-b1f5176cada4) Jul 1 00:20:41.639: INFO: Unable to read jessie_tcp@dns-test-service.dns-9587.svc from pod dns-9587/dns-test-9dd079d3-f931-497b-8852-b1f5176cada4: the server could not find the requested resource (get pods dns-test-9dd079d3-f931-497b-8852-b1f5176cada4) Jul 1 00:20:41.643: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-9587.svc from pod dns-9587/dns-test-9dd079d3-f931-497b-8852-b1f5176cada4: the server could not find the requested resource (get pods dns-test-9dd079d3-f931-497b-8852-b1f5176cada4) Jul 1 00:20:41.646: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-9587.svc from pod dns-9587/dns-test-9dd079d3-f931-497b-8852-b1f5176cada4: the server could not find the requested resource (get pods dns-test-9dd079d3-f931-497b-8852-b1f5176cada4) Jul 1 00:20:41.667: INFO: Lookups using dns-9587/dns-test-9dd079d3-f931-497b-8852-b1f5176cada4 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-9587 wheezy_tcp@dns-test-service.dns-9587 wheezy_udp@dns-test-service.dns-9587.svc wheezy_tcp@dns-test-service.dns-9587.svc wheezy_udp@_http._tcp.dns-test-service.dns-9587.svc wheezy_tcp@_http._tcp.dns-test-service.dns-9587.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-9587 jessie_tcp@dns-test-service.dns-9587 jessie_udp@dns-test-service.dns-9587.svc jessie_tcp@dns-test-service.dns-9587.svc jessie_udp@_http._tcp.dns-test-service.dns-9587.svc jessie_tcp@_http._tcp.dns-test-service.dns-9587.svc] Jul 1 00:20:46.579: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-9587/dns-test-9dd079d3-f931-497b-8852-b1f5176cada4: the server could not find the requested resource (get pods dns-test-9dd079d3-f931-497b-8852-b1f5176cada4) Jul 1 00:20:46.583: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-9587/dns-test-9dd079d3-f931-497b-8852-b1f5176cada4: the server could not find the requested resource (get pods dns-test-9dd079d3-f931-497b-8852-b1f5176cada4) Jul 1 00:20:46.586: INFO: Unable to read wheezy_udp@dns-test-service.dns-9587 from pod dns-9587/dns-test-9dd079d3-f931-497b-8852-b1f5176cada4: the server could not find the requested resource (get pods dns-test-9dd079d3-f931-497b-8852-b1f5176cada4) Jul 1 00:20:46.588: INFO: Unable to read wheezy_tcp@dns-test-service.dns-9587 from pod 
dns-9587/dns-test-9dd079d3-f931-497b-8852-b1f5176cada4: the server could not find the requested resource (get pods dns-test-9dd079d3-f931-497b-8852-b1f5176cada4) Jul 1 00:20:46.591: INFO: Unable to read wheezy_udp@dns-test-service.dns-9587.svc from pod dns-9587/dns-test-9dd079d3-f931-497b-8852-b1f5176cada4: the server could not find the requested resource (get pods dns-test-9dd079d3-f931-497b-8852-b1f5176cada4) Jul 1 00:20:46.595: INFO: Unable to read wheezy_tcp@dns-test-service.dns-9587.svc from pod dns-9587/dns-test-9dd079d3-f931-497b-8852-b1f5176cada4: the server could not find the requested resource (get pods dns-test-9dd079d3-f931-497b-8852-b1f5176cada4) Jul 1 00:20:46.598: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-9587.svc from pod dns-9587/dns-test-9dd079d3-f931-497b-8852-b1f5176cada4: the server could not find the requested resource (get pods dns-test-9dd079d3-f931-497b-8852-b1f5176cada4) Jul 1 00:20:46.601: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-9587.svc from pod dns-9587/dns-test-9dd079d3-f931-497b-8852-b1f5176cada4: the server could not find the requested resource (get pods dns-test-9dd079d3-f931-497b-8852-b1f5176cada4) Jul 1 00:20:46.624: INFO: Unable to read jessie_udp@dns-test-service from pod dns-9587/dns-test-9dd079d3-f931-497b-8852-b1f5176cada4: the server could not find the requested resource (get pods dns-test-9dd079d3-f931-497b-8852-b1f5176cada4) Jul 1 00:20:46.627: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-9587/dns-test-9dd079d3-f931-497b-8852-b1f5176cada4: the server could not find the requested resource (get pods dns-test-9dd079d3-f931-497b-8852-b1f5176cada4) Jul 1 00:20:46.630: INFO: Unable to read jessie_udp@dns-test-service.dns-9587 from pod dns-9587/dns-test-9dd079d3-f931-497b-8852-b1f5176cada4: the server could not find the requested resource (get pods dns-test-9dd079d3-f931-497b-8852-b1f5176cada4) Jul 1 00:20:46.632: INFO: Unable to read jessie_tcp@dns-test-service.dns-9587 from pod dns-9587/dns-test-9dd079d3-f931-497b-8852-b1f5176cada4: the server could not find the requested resource (get pods dns-test-9dd079d3-f931-497b-8852-b1f5176cada4) Jul 1 00:20:46.635: INFO: Unable to read jessie_udp@dns-test-service.dns-9587.svc from pod dns-9587/dns-test-9dd079d3-f931-497b-8852-b1f5176cada4: the server could not find the requested resource (get pods dns-test-9dd079d3-f931-497b-8852-b1f5176cada4) Jul 1 00:20:46.638: INFO: Unable to read jessie_tcp@dns-test-service.dns-9587.svc from pod dns-9587/dns-test-9dd079d3-f931-497b-8852-b1f5176cada4: the server could not find the requested resource (get pods dns-test-9dd079d3-f931-497b-8852-b1f5176cada4) Jul 1 00:20:46.641: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-9587.svc from pod dns-9587/dns-test-9dd079d3-f931-497b-8852-b1f5176cada4: the server could not find the requested resource (get pods dns-test-9dd079d3-f931-497b-8852-b1f5176cada4) Jul 1 00:20:46.644: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-9587.svc from pod dns-9587/dns-test-9dd079d3-f931-497b-8852-b1f5176cada4: the server could not find the requested resource (get pods dns-test-9dd079d3-f931-497b-8852-b1f5176cada4) Jul 1 00:20:46.661: INFO: Lookups using dns-9587/dns-test-9dd079d3-f931-497b-8852-b1f5176cada4 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-9587 wheezy_tcp@dns-test-service.dns-9587 wheezy_udp@dns-test-service.dns-9587.svc wheezy_tcp@dns-test-service.dns-9587.svc 
wheezy_udp@_http._tcp.dns-test-service.dns-9587.svc wheezy_tcp@_http._tcp.dns-test-service.dns-9587.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-9587 jessie_tcp@dns-test-service.dns-9587 jessie_udp@dns-test-service.dns-9587.svc jessie_tcp@dns-test-service.dns-9587.svc jessie_udp@_http._tcp.dns-test-service.dns-9587.svc jessie_tcp@_http._tcp.dns-test-service.dns-9587.svc] Jul 1 00:20:51.579: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-9587/dns-test-9dd079d3-f931-497b-8852-b1f5176cada4: the server could not find the requested resource (get pods dns-test-9dd079d3-f931-497b-8852-b1f5176cada4) Jul 1 00:20:51.582: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-9587/dns-test-9dd079d3-f931-497b-8852-b1f5176cada4: the server could not find the requested resource (get pods dns-test-9dd079d3-f931-497b-8852-b1f5176cada4) Jul 1 00:20:51.586: INFO: Unable to read wheezy_udp@dns-test-service.dns-9587 from pod dns-9587/dns-test-9dd079d3-f931-497b-8852-b1f5176cada4: the server could not find the requested resource (get pods dns-test-9dd079d3-f931-497b-8852-b1f5176cada4) Jul 1 00:20:51.588: INFO: Unable to read wheezy_tcp@dns-test-service.dns-9587 from pod dns-9587/dns-test-9dd079d3-f931-497b-8852-b1f5176cada4: the server could not find the requested resource (get pods dns-test-9dd079d3-f931-497b-8852-b1f5176cada4) Jul 1 00:20:51.591: INFO: Unable to read wheezy_udp@dns-test-service.dns-9587.svc from pod dns-9587/dns-test-9dd079d3-f931-497b-8852-b1f5176cada4: the server could not find the requested resource (get pods dns-test-9dd079d3-f931-497b-8852-b1f5176cada4) Jul 1 00:20:51.593: INFO: Unable to read wheezy_tcp@dns-test-service.dns-9587.svc from pod dns-9587/dns-test-9dd079d3-f931-497b-8852-b1f5176cada4: the server could not find the requested resource (get pods dns-test-9dd079d3-f931-497b-8852-b1f5176cada4) Jul 1 00:20:51.596: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-9587.svc from pod dns-9587/dns-test-9dd079d3-f931-497b-8852-b1f5176cada4: the server could not find the requested resource (get pods dns-test-9dd079d3-f931-497b-8852-b1f5176cada4) Jul 1 00:20:51.599: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-9587.svc from pod dns-9587/dns-test-9dd079d3-f931-497b-8852-b1f5176cada4: the server could not find the requested resource (get pods dns-test-9dd079d3-f931-497b-8852-b1f5176cada4) Jul 1 00:20:51.622: INFO: Unable to read jessie_udp@dns-test-service from pod dns-9587/dns-test-9dd079d3-f931-497b-8852-b1f5176cada4: the server could not find the requested resource (get pods dns-test-9dd079d3-f931-497b-8852-b1f5176cada4) Jul 1 00:20:51.624: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-9587/dns-test-9dd079d3-f931-497b-8852-b1f5176cada4: the server could not find the requested resource (get pods dns-test-9dd079d3-f931-497b-8852-b1f5176cada4) Jul 1 00:20:51.628: INFO: Unable to read jessie_udp@dns-test-service.dns-9587 from pod dns-9587/dns-test-9dd079d3-f931-497b-8852-b1f5176cada4: the server could not find the requested resource (get pods dns-test-9dd079d3-f931-497b-8852-b1f5176cada4) Jul 1 00:20:51.631: INFO: Unable to read jessie_tcp@dns-test-service.dns-9587 from pod dns-9587/dns-test-9dd079d3-f931-497b-8852-b1f5176cada4: the server could not find the requested resource (get pods dns-test-9dd079d3-f931-497b-8852-b1f5176cada4) Jul 1 00:20:51.634: INFO: Unable to read jessie_udp@dns-test-service.dns-9587.svc from pod 
dns-9587/dns-test-9dd079d3-f931-497b-8852-b1f5176cada4: the server could not find the requested resource (get pods dns-test-9dd079d3-f931-497b-8852-b1f5176cada4) Jul 1 00:20:51.637: INFO: Unable to read jessie_tcp@dns-test-service.dns-9587.svc from pod dns-9587/dns-test-9dd079d3-f931-497b-8852-b1f5176cada4: the server could not find the requested resource (get pods dns-test-9dd079d3-f931-497b-8852-b1f5176cada4) Jul 1 00:20:51.640: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-9587.svc from pod dns-9587/dns-test-9dd079d3-f931-497b-8852-b1f5176cada4: the server could not find the requested resource (get pods dns-test-9dd079d3-f931-497b-8852-b1f5176cada4) Jul 1 00:20:51.642: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-9587.svc from pod dns-9587/dns-test-9dd079d3-f931-497b-8852-b1f5176cada4: the server could not find the requested resource (get pods dns-test-9dd079d3-f931-497b-8852-b1f5176cada4) Jul 1 00:20:51.659: INFO: Lookups using dns-9587/dns-test-9dd079d3-f931-497b-8852-b1f5176cada4 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-9587 wheezy_tcp@dns-test-service.dns-9587 wheezy_udp@dns-test-service.dns-9587.svc wheezy_tcp@dns-test-service.dns-9587.svc wheezy_udp@_http._tcp.dns-test-service.dns-9587.svc wheezy_tcp@_http._tcp.dns-test-service.dns-9587.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-9587 jessie_tcp@dns-test-service.dns-9587 jessie_udp@dns-test-service.dns-9587.svc jessie_tcp@dns-test-service.dns-9587.svc jessie_udp@_http._tcp.dns-test-service.dns-9587.svc jessie_tcp@_http._tcp.dns-test-service.dns-9587.svc] Jul 1 00:20:56.579: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-9587/dns-test-9dd079d3-f931-497b-8852-b1f5176cada4: the server could not find the requested resource (get pods dns-test-9dd079d3-f931-497b-8852-b1f5176cada4) Jul 1 00:20:56.582: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-9587/dns-test-9dd079d3-f931-497b-8852-b1f5176cada4: the server could not find the requested resource (get pods dns-test-9dd079d3-f931-497b-8852-b1f5176cada4) Jul 1 00:20:56.585: INFO: Unable to read wheezy_udp@dns-test-service.dns-9587 from pod dns-9587/dns-test-9dd079d3-f931-497b-8852-b1f5176cada4: the server could not find the requested resource (get pods dns-test-9dd079d3-f931-497b-8852-b1f5176cada4) Jul 1 00:20:56.588: INFO: Unable to read wheezy_tcp@dns-test-service.dns-9587 from pod dns-9587/dns-test-9dd079d3-f931-497b-8852-b1f5176cada4: the server could not find the requested resource (get pods dns-test-9dd079d3-f931-497b-8852-b1f5176cada4) Jul 1 00:20:56.591: INFO: Unable to read wheezy_udp@dns-test-service.dns-9587.svc from pod dns-9587/dns-test-9dd079d3-f931-497b-8852-b1f5176cada4: the server could not find the requested resource (get pods dns-test-9dd079d3-f931-497b-8852-b1f5176cada4) Jul 1 00:20:56.594: INFO: Unable to read wheezy_tcp@dns-test-service.dns-9587.svc from pod dns-9587/dns-test-9dd079d3-f931-497b-8852-b1f5176cada4: the server could not find the requested resource (get pods dns-test-9dd079d3-f931-497b-8852-b1f5176cada4) Jul 1 00:20:56.598: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-9587.svc from pod dns-9587/dns-test-9dd079d3-f931-497b-8852-b1f5176cada4: the server could not find the requested resource (get pods dns-test-9dd079d3-f931-497b-8852-b1f5176cada4) Jul 1 00:20:56.601: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-9587.svc from pod 
dns-9587/dns-test-9dd079d3-f931-497b-8852-b1f5176cada4: the server could not find the requested resource (get pods dns-test-9dd079d3-f931-497b-8852-b1f5176cada4) Jul 1 00:20:56.624: INFO: Unable to read jessie_udp@dns-test-service from pod dns-9587/dns-test-9dd079d3-f931-497b-8852-b1f5176cada4: the server could not find the requested resource (get pods dns-test-9dd079d3-f931-497b-8852-b1f5176cada4) Jul 1 00:20:56.627: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-9587/dns-test-9dd079d3-f931-497b-8852-b1f5176cada4: the server could not find the requested resource (get pods dns-test-9dd079d3-f931-497b-8852-b1f5176cada4) Jul 1 00:20:56.629: INFO: Unable to read jessie_udp@dns-test-service.dns-9587 from pod dns-9587/dns-test-9dd079d3-f931-497b-8852-b1f5176cada4: the server could not find the requested resource (get pods dns-test-9dd079d3-f931-497b-8852-b1f5176cada4) Jul 1 00:20:56.632: INFO: Unable to read jessie_tcp@dns-test-service.dns-9587 from pod dns-9587/dns-test-9dd079d3-f931-497b-8852-b1f5176cada4: the server could not find the requested resource (get pods dns-test-9dd079d3-f931-497b-8852-b1f5176cada4) Jul 1 00:20:56.635: INFO: Unable to read jessie_udp@dns-test-service.dns-9587.svc from pod dns-9587/dns-test-9dd079d3-f931-497b-8852-b1f5176cada4: the server could not find the requested resource (get pods dns-test-9dd079d3-f931-497b-8852-b1f5176cada4) Jul 1 00:20:56.639: INFO: Unable to read jessie_tcp@dns-test-service.dns-9587.svc from pod dns-9587/dns-test-9dd079d3-f931-497b-8852-b1f5176cada4: the server could not find the requested resource (get pods dns-test-9dd079d3-f931-497b-8852-b1f5176cada4) Jul 1 00:20:56.642: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-9587.svc from pod dns-9587/dns-test-9dd079d3-f931-497b-8852-b1f5176cada4: the server could not find the requested resource (get pods dns-test-9dd079d3-f931-497b-8852-b1f5176cada4) Jul 1 00:20:56.646: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-9587.svc from pod dns-9587/dns-test-9dd079d3-f931-497b-8852-b1f5176cada4: the server could not find the requested resource (get pods dns-test-9dd079d3-f931-497b-8852-b1f5176cada4) Jul 1 00:20:56.665: INFO: Lookups using dns-9587/dns-test-9dd079d3-f931-497b-8852-b1f5176cada4 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-9587 wheezy_tcp@dns-test-service.dns-9587 wheezy_udp@dns-test-service.dns-9587.svc wheezy_tcp@dns-test-service.dns-9587.svc wheezy_udp@_http._tcp.dns-test-service.dns-9587.svc wheezy_tcp@_http._tcp.dns-test-service.dns-9587.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-9587 jessie_tcp@dns-test-service.dns-9587 jessie_udp@dns-test-service.dns-9587.svc jessie_tcp@dns-test-service.dns-9587.svc jessie_udp@_http._tcp.dns-test-service.dns-9587.svc jessie_tcp@_http._tcp.dns-test-service.dns-9587.svc] Jul 1 00:21:01.578: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-9587/dns-test-9dd079d3-f931-497b-8852-b1f5176cada4: the server could not find the requested resource (get pods dns-test-9dd079d3-f931-497b-8852-b1f5176cada4) Jul 1 00:21:01.618: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-9587/dns-test-9dd079d3-f931-497b-8852-b1f5176cada4: the server could not find the requested resource (get pods dns-test-9dd079d3-f931-497b-8852-b1f5176cada4) Jul 1 00:21:01.621: INFO: Unable to read wheezy_udp@dns-test-service.dns-9587 from pod dns-9587/dns-test-9dd079d3-f931-497b-8852-b1f5176cada4: the server could 
not find the requested resource (get pods dns-test-9dd079d3-f931-497b-8852-b1f5176cada4) Jul 1 00:21:01.624: INFO: Unable to read wheezy_tcp@dns-test-service.dns-9587 from pod dns-9587/dns-test-9dd079d3-f931-497b-8852-b1f5176cada4: the server could not find the requested resource (get pods dns-test-9dd079d3-f931-497b-8852-b1f5176cada4) Jul 1 00:21:01.627: INFO: Unable to read wheezy_udp@dns-test-service.dns-9587.svc from pod dns-9587/dns-test-9dd079d3-f931-497b-8852-b1f5176cada4: the server could not find the requested resource (get pods dns-test-9dd079d3-f931-497b-8852-b1f5176cada4) Jul 1 00:21:01.630: INFO: Unable to read wheezy_tcp@dns-test-service.dns-9587.svc from pod dns-9587/dns-test-9dd079d3-f931-497b-8852-b1f5176cada4: the server could not find the requested resource (get pods dns-test-9dd079d3-f931-497b-8852-b1f5176cada4) Jul 1 00:21:01.632: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-9587.svc from pod dns-9587/dns-test-9dd079d3-f931-497b-8852-b1f5176cada4: the server could not find the requested resource (get pods dns-test-9dd079d3-f931-497b-8852-b1f5176cada4) Jul 1 00:21:01.635: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-9587.svc from pod dns-9587/dns-test-9dd079d3-f931-497b-8852-b1f5176cada4: the server could not find the requested resource (get pods dns-test-9dd079d3-f931-497b-8852-b1f5176cada4) Jul 1 00:21:01.654: INFO: Unable to read jessie_udp@dns-test-service from pod dns-9587/dns-test-9dd079d3-f931-497b-8852-b1f5176cada4: the server could not find the requested resource (get pods dns-test-9dd079d3-f931-497b-8852-b1f5176cada4) Jul 1 00:21:01.657: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-9587/dns-test-9dd079d3-f931-497b-8852-b1f5176cada4: the server could not find the requested resource (get pods dns-test-9dd079d3-f931-497b-8852-b1f5176cada4) Jul 1 00:21:01.660: INFO: Unable to read jessie_udp@dns-test-service.dns-9587 from pod dns-9587/dns-test-9dd079d3-f931-497b-8852-b1f5176cada4: the server could not find the requested resource (get pods dns-test-9dd079d3-f931-497b-8852-b1f5176cada4) Jul 1 00:21:01.663: INFO: Unable to read jessie_tcp@dns-test-service.dns-9587 from pod dns-9587/dns-test-9dd079d3-f931-497b-8852-b1f5176cada4: the server could not find the requested resource (get pods dns-test-9dd079d3-f931-497b-8852-b1f5176cada4) Jul 1 00:21:01.665: INFO: Unable to read jessie_udp@dns-test-service.dns-9587.svc from pod dns-9587/dns-test-9dd079d3-f931-497b-8852-b1f5176cada4: the server could not find the requested resource (get pods dns-test-9dd079d3-f931-497b-8852-b1f5176cada4) Jul 1 00:21:01.668: INFO: Unable to read jessie_tcp@dns-test-service.dns-9587.svc from pod dns-9587/dns-test-9dd079d3-f931-497b-8852-b1f5176cada4: the server could not find the requested resource (get pods dns-test-9dd079d3-f931-497b-8852-b1f5176cada4) Jul 1 00:21:01.671: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-9587.svc from pod dns-9587/dns-test-9dd079d3-f931-497b-8852-b1f5176cada4: the server could not find the requested resource (get pods dns-test-9dd079d3-f931-497b-8852-b1f5176cada4) Jul 1 00:21:01.673: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-9587.svc from pod dns-9587/dns-test-9dd079d3-f931-497b-8852-b1f5176cada4: the server could not find the requested resource (get pods dns-test-9dd079d3-f931-497b-8852-b1f5176cada4) Jul 1 00:21:01.690: INFO: Lookups using dns-9587/dns-test-9dd079d3-f931-497b-8852-b1f5176cada4 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service 
wheezy_udp@dns-test-service.dns-9587 wheezy_tcp@dns-test-service.dns-9587 wheezy_udp@dns-test-service.dns-9587.svc wheezy_tcp@dns-test-service.dns-9587.svc wheezy_udp@_http._tcp.dns-test-service.dns-9587.svc wheezy_tcp@_http._tcp.dns-test-service.dns-9587.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-9587 jessie_tcp@dns-test-service.dns-9587 jessie_udp@dns-test-service.dns-9587.svc jessie_tcp@dns-test-service.dns-9587.svc jessie_udp@_http._tcp.dns-test-service.dns-9587.svc jessie_tcp@_http._tcp.dns-test-service.dns-9587.svc] Jul 1 00:21:06.668: INFO: DNS probes using dns-9587/dns-test-9dd079d3-f931-497b-8852-b1f5176cada4 succeeded STEP: deleting the pod STEP: deleting the test service STEP: deleting the test headless service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 1 00:21:07.342: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-9587" for this suite. • [SLOW TEST:37.132 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] DNS should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]","total":294,"completed":138,"skipped":2350,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 1 00:21:07.379: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating projection with secret that has name projected-secret-test-map-305906d9-3004-4821-ab7e-d9139f666716 STEP: Creating a pod to test consume secrets Jul 1 00:21:07.547: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-4935dcec-557c-42b6-8c2a-4cc8cbcd2caf" in namespace "projected-7730" to be "Succeeded or Failed" Jul 1 00:21:07.551: INFO: Pod "pod-projected-secrets-4935dcec-557c-42b6-8c2a-4cc8cbcd2caf": Phase="Pending", Reason="", readiness=false. Elapsed: 3.722958ms Jul 1 00:21:09.555: INFO: Pod "pod-projected-secrets-4935dcec-557c-42b6-8c2a-4cc8cbcd2caf": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007886385s Jul 1 00:21:11.559: INFO: Pod "pod-projected-secrets-4935dcec-557c-42b6-8c2a-4cc8cbcd2caf": Phase="Pending", Reason="", readiness=false. Elapsed: 4.011666993s Jul 1 00:21:13.563: INFO: Pod "pod-projected-secrets-4935dcec-557c-42b6-8c2a-4cc8cbcd2caf": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.015844709s STEP: Saw pod success Jul 1 00:21:13.563: INFO: Pod "pod-projected-secrets-4935dcec-557c-42b6-8c2a-4cc8cbcd2caf" satisfied condition "Succeeded or Failed" Jul 1 00:21:13.566: INFO: Trying to get logs from node latest-worker2 pod pod-projected-secrets-4935dcec-557c-42b6-8c2a-4cc8cbcd2caf container projected-secret-volume-test: STEP: delete the pod Jul 1 00:21:13.643: INFO: Waiting for pod pod-projected-secrets-4935dcec-557c-42b6-8c2a-4cc8cbcd2caf to disappear Jul 1 00:21:13.647: INFO: Pod pod-projected-secrets-4935dcec-557c-42b6-8c2a-4cc8cbcd2caf no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 1 00:21:13.647: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-7730" for this suite. • [SLOW TEST:6.276 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:35 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":294,"completed":139,"skipped":2372,"failed":0} SSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 1 00:21:13.655: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin Jul 1 00:21:13.714: INFO: Waiting up to 5m0s for pod "downwardapi-volume-bc24340e-de24-4ec5-b73d-9e5723f0cbdd" in namespace "downward-api-9291" to be "Succeeded or Failed" Jul 1 00:21:13.728: INFO: Pod "downwardapi-volume-bc24340e-de24-4ec5-b73d-9e5723f0cbdd": Phase="Pending", Reason="", readiness=false. Elapsed: 13.329407ms Jul 1 00:21:15.733: INFO: Pod "downwardapi-volume-bc24340e-de24-4ec5-b73d-9e5723f0cbdd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018514251s Jul 1 00:21:17.737: INFO: Pod "downwardapi-volume-bc24340e-de24-4ec5-b73d-9e5723f0cbdd": Phase="Running", Reason="", readiness=true. Elapsed: 4.022973445s Jul 1 00:21:19.742: INFO: Pod "downwardapi-volume-bc24340e-de24-4ec5-b73d-9e5723f0cbdd": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.027659653s STEP: Saw pod success Jul 1 00:21:19.742: INFO: Pod "downwardapi-volume-bc24340e-de24-4ec5-b73d-9e5723f0cbdd" satisfied condition "Succeeded or Failed" Jul 1 00:21:19.745: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-bc24340e-de24-4ec5-b73d-9e5723f0cbdd container client-container: STEP: delete the pod Jul 1 00:21:19.792: INFO: Waiting for pod downwardapi-volume-bc24340e-de24-4ec5-b73d-9e5723f0cbdd to disappear Jul 1 00:21:19.800: INFO: Pod downwardapi-volume-bc24340e-de24-4ec5-b73d-9e5723f0cbdd no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 1 00:21:19.801: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-9291" for this suite. • [SLOW TEST:6.153 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:37 should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance]","total":294,"completed":140,"skipped":2378,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 1 00:21:19.809: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with secret pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating pod pod-subpath-test-secret-fp6v STEP: Creating a pod to test atomic-volume-subpath Jul 1 00:21:19.928: INFO: Waiting up to 5m0s for pod "pod-subpath-test-secret-fp6v" in namespace "subpath-3241" to be "Succeeded or Failed" Jul 1 00:21:19.943: INFO: Pod "pod-subpath-test-secret-fp6v": Phase="Pending", Reason="", readiness=false. Elapsed: 14.296505ms Jul 1 00:21:21.947: INFO: Pod "pod-subpath-test-secret-fp6v": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018425088s Jul 1 00:21:23.951: INFO: Pod "pod-subpath-test-secret-fp6v": Phase="Running", Reason="", readiness=true. Elapsed: 4.022560264s Jul 1 00:21:25.955: INFO: Pod "pod-subpath-test-secret-fp6v": Phase="Running", Reason="", readiness=true. Elapsed: 6.026194284s Jul 1 00:21:27.959: INFO: Pod "pod-subpath-test-secret-fp6v": Phase="Running", Reason="", readiness=true. Elapsed: 8.030348262s Jul 1 00:21:29.963: INFO: Pod "pod-subpath-test-secret-fp6v": Phase="Running", Reason="", readiness=true. Elapsed: 10.034515876s Jul 1 00:21:31.967: INFO: Pod "pod-subpath-test-secret-fp6v": Phase="Running", Reason="", readiness=true. 
Elapsed: 12.038179324s Jul 1 00:21:33.972: INFO: Pod "pod-subpath-test-secret-fp6v": Phase="Running", Reason="", readiness=true. Elapsed: 14.043201812s Jul 1 00:21:35.976: INFO: Pod "pod-subpath-test-secret-fp6v": Phase="Running", Reason="", readiness=true. Elapsed: 16.047807824s Jul 1 00:21:37.989: INFO: Pod "pod-subpath-test-secret-fp6v": Phase="Running", Reason="", readiness=true. Elapsed: 18.060536444s Jul 1 00:21:39.994: INFO: Pod "pod-subpath-test-secret-fp6v": Phase="Running", Reason="", readiness=true. Elapsed: 20.065133801s Jul 1 00:21:41.998: INFO: Pod "pod-subpath-test-secret-fp6v": Phase="Running", Reason="", readiness=true. Elapsed: 22.069982077s Jul 1 00:21:44.003: INFO: Pod "pod-subpath-test-secret-fp6v": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.074206194s STEP: Saw pod success Jul 1 00:21:44.003: INFO: Pod "pod-subpath-test-secret-fp6v" satisfied condition "Succeeded or Failed" Jul 1 00:21:44.006: INFO: Trying to get logs from node latest-worker pod pod-subpath-test-secret-fp6v container test-container-subpath-secret-fp6v: STEP: delete the pod Jul 1 00:21:44.058: INFO: Waiting for pod pod-subpath-test-secret-fp6v to disappear Jul 1 00:21:44.090: INFO: Pod pod-subpath-test-secret-fp6v no longer exists STEP: Deleting pod pod-subpath-test-secret-fp6v Jul 1 00:21:44.090: INFO: Deleting pod "pod-subpath-test-secret-fp6v" in namespace "subpath-3241" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 1 00:21:44.093: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-3241" for this suite. • [SLOW TEST:24.294 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with secret pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [LinuxOnly] [Conformance]","total":294,"completed":141,"skipped":2391,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD with validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 1 00:21:44.103: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD with validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Jul 1 00:21:44.152: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with known and required properties Jul 1 00:21:46.061: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-3672 create -f -' Jul 1 00:21:52.264: INFO: stderr: "" Jul 1 
00:21:52.264: INFO: stdout: "e2e-test-crd-publish-openapi-1848-crd.crd-publish-openapi-test-foo.example.com/test-foo created\n" Jul 1 00:21:52.264: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-3672 delete e2e-test-crd-publish-openapi-1848-crds test-foo' Jul 1 00:21:52.370: INFO: stderr: "" Jul 1 00:21:52.370: INFO: stdout: "e2e-test-crd-publish-openapi-1848-crd.crd-publish-openapi-test-foo.example.com \"test-foo\" deleted\n" Jul 1 00:21:52.370: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-3672 apply -f -' Jul 1 00:21:57.153: INFO: stderr: "" Jul 1 00:21:57.153: INFO: stdout: "e2e-test-crd-publish-openapi-1848-crd.crd-publish-openapi-test-foo.example.com/test-foo created\n" Jul 1 00:21:57.153: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-3672 delete e2e-test-crd-publish-openapi-1848-crds test-foo' Jul 1 00:21:57.251: INFO: stderr: "" Jul 1 00:21:57.251: INFO: stdout: "e2e-test-crd-publish-openapi-1848-crd.crd-publish-openapi-test-foo.example.com \"test-foo\" deleted\n" STEP: client-side validation (kubectl create and apply) rejects request with unknown properties when disallowed by the schema Jul 1 00:21:57.251: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-3672 create -f -' Jul 1 00:21:59.771: INFO: rc: 1 Jul 1 00:21:59.771: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-3672 apply -f -' Jul 1 00:22:02.867: INFO: rc: 1 STEP: client-side validation (kubectl create and apply) rejects request without required properties Jul 1 00:22:02.867: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-3672 create -f -' Jul 1 00:22:06.263: INFO: rc: 1 Jul 1 00:22:06.263: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-3672 apply -f -' Jul 1 00:22:09.826: INFO: rc: 1 STEP: kubectl explain works to explain CR properties Jul 1 00:22:09.826: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-1848-crds' Jul 1 00:22:12.769: INFO: stderr: "" Jul 1 00:22:12.769: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-1848-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nDESCRIPTION:\n Foo CRD for Testing\n\nFIELDS:\n apiVersion\t<string>\n APIVersion defines the versioned schema of this representation of an\n object. Servers should convert recognized schemas to the latest internal\n value, and may reject unrecognized values. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n\n kind\t<string>\n Kind is a string value representing the REST resource this object\n represents. Servers may infer this from the endpoint the client submits\n requests to. Cannot be updated. In CamelCase. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n\n metadata\t<Object>\n Standard object's metadata.
More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n spec\t<Object>\n Specification of Foo\n\n status\t<Object>\n Status of Foo\n\n" STEP: kubectl explain works to explain CR properties recursively Jul 1 00:22:12.770: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-1848-crds.metadata' Jul 1 00:22:15.759: INFO: stderr: "" Jul 1 00:22:15.759: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-1848-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: metadata <Object>\n\nDESCRIPTION:\n Standard object's metadata. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n ObjectMeta is metadata that all persisted resources must have, which\n includes all objects users must create.\n\nFIELDS:\n annotations\t<map[string]string>\n Annotations is an unstructured key value map stored with a resource that\n may be set by external tools to store and retrieve arbitrary metadata. They\n are not queryable and should be preserved when modifying objects. More\n info: http://kubernetes.io/docs/user-guide/annotations\n\n clusterName\t<string>\n The name of the cluster which the object belongs to. This is used to\n distinguish resources with same name and namespace in different clusters.\n This field is not set anywhere right now and apiserver is going to ignore\n it if set in create or update request.\n\n creationTimestamp\t<string>\n CreationTimestamp is a timestamp representing the server time when this\n object was created. It is not guaranteed to be set in happens-before order\n across separate operations. Clients may not set this value. It is\n represented in RFC3339 form and is in UTC.\n\n Populated by the system. Read-only. Null for lists. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n deletionGracePeriodSeconds\t<integer>\n Number of seconds allowed for this object to gracefully terminate before it\n will be removed from the system. Only set when deletionTimestamp is also\n set. May only be shortened. Read-only.\n\n deletionTimestamp\t<string>\n DeletionTimestamp is RFC 3339 date and time at which this resource will be\n deleted. This field is set by the server when a graceful deletion is\n requested by the user, and is not directly settable by a client. The\n resource is expected to be deleted (no longer visible from resource lists,\n and not reachable by name) after the time in this field, once the\n finalizers list is empty. As long as the finalizers list contains items,\n deletion is blocked. Once the deletionTimestamp is set, this value may not\n be unset or be set further into the future, although it may be shortened or\n the resource may be deleted prior to this time. For example, a user may\n request that a pod is deleted in 30 seconds. The Kubelet will react by\n sending a graceful termination signal to the containers in the pod. After\n that 30 seconds, the Kubelet will send a hard termination signal (SIGKILL)\n to the container and after cleanup, remove the pod from the API. In the\n presence of network partitions, this object may still exist after this\n timestamp, until an administrator or automated process can determine the\n resource is fully terminated. If not set, graceful deletion of the object\n has not been requested.\n\n Populated by the system when a graceful deletion is requested.
Read-only.\n More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n finalizers\t<[]string>\n Must be empty before the object is deleted from the registry. Each entry is\n an identifier for the responsible component that will remove the entry from\n the list. If the deletionTimestamp of the object is non-nil, entries in\n this list can only be removed. Finalizers may be processed and removed in\n any order. Order is NOT enforced because it introduces significant risk of\n stuck finalizers. finalizers is a shared field, any actor with permission\n can reorder it. If the finalizer list is processed in order, then this can\n lead to a situation in which the component responsible for the first\n finalizer in the list is waiting for a signal (field value, external\n system, or other) produced by a component responsible for a finalizer later\n in the list, resulting in a deadlock. Without enforced ordering finalizers\n are free to order amongst themselves and are not vulnerable to ordering\n changes in the list.\n\n generateName\t<string>\n GenerateName is an optional prefix, used by the server, to generate a\n unique name ONLY IF the Name field has not been provided. If this field is\n used, the name returned to the client will be different than the name\n passed. This value will also be combined with a unique suffix. The provided\n value has the same validation rules as the Name field, and may be truncated\n by the length of the suffix required to make the value unique on the\n server.\n\n If this field is specified and the generated name exists, the server will\n NOT return a 409 - instead, it will either return 201 Created or 500 with\n Reason ServerTimeout indicating a unique name could not be found in the\n time allotted, and the client should retry (optionally after the time\n indicated in the Retry-After header).\n\n Applied only if Name is not specified. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#idempotency\n\n generation\t<integer>\n A sequence number representing a specific generation of the desired state.\n Populated by the system. Read-only.\n\n labels\t<map[string]string>\n Map of string keys and values that can be used to organize and categorize\n (scope and select) objects. May match selectors of replication controllers\n and services. More info: http://kubernetes.io/docs/user-guide/labels\n\n managedFields\t<[]Object>\n ManagedFields maps workflow-id and version to the set of fields that are\n managed by that workflow. This is mostly for internal housekeeping, and\n users typically shouldn't need to set or understand this field. A workflow\n can be the user's name, a controller's name, or the name of a specific\n apply path like \"ci-cd\". The set of fields is always in the version that\n the workflow used when modifying the object.\n\n name\t<string>\n Name must be unique within a namespace. Is required when creating\n resources, although some resources may allow a client to request the\n generation of an appropriate name automatically. Name is primarily intended\n for creation idempotence and configuration definition. Cannot be updated.\n More info: http://kubernetes.io/docs/user-guide/identifiers#names\n\n namespace\t<string>\n Namespace defines the space within each name must be unique. An empty\n namespace is equivalent to the \"default\" namespace, but \"default\" is the\n canonical representation.
Not all objects are required to be scoped to a\n namespace - the value of this field for those objects will be empty.\n\n Must be a DNS_LABEL. Cannot be updated. More info:\n http://kubernetes.io/docs/user-guide/namespaces\n\n ownerReferences\t<[]Object>\n List of objects depended by this object. If ALL objects in the list have\n been deleted, this object will be garbage collected. If this object is\n managed by a controller, then an entry in this list will point to this\n controller, with the controller field set to true. There cannot be more\n than one managing controller.\n\n resourceVersion\t<string>\n An opaque value that represents the internal version of this object that\n can be used by clients to determine when objects have changed. May be used\n for optimistic concurrency, change detection, and the watch operation on a\n resource or set of resources. Clients must treat these values as opaque and\n passed unmodified back to the server. They may only be valid for a\n particular resource or set of resources.\n\n Populated by the system. Read-only. Value must be treated as opaque by\n clients and . More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#concurrency-control-and-consistency\n\n selfLink\t<string>\n SelfLink is a URL representing this object. Populated by the system.\n Read-only.\n\n DEPRECATED Kubernetes will stop propagating this field in 1.20 release and\n the field is planned to be removed in 1.21 release.\n\n uid\t<string>\n UID is the unique in time and space value for this object. It is typically\n generated by the server on successful creation of a resource and is not\n allowed to change on PUT operations.\n\n Populated by the system. Read-only. More info:\n http://kubernetes.io/docs/user-guide/identifiers#uids\n\n" Jul 1 00:22:15.760: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-1848-crds.spec' Jul 1 00:22:18.692: INFO: stderr: "" Jul 1 00:22:18.692: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-1848-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: spec <Object>\n\nDESCRIPTION:\n Specification of Foo\n\nFIELDS:\n bars\t<[]Object>\n List of Bars and their specs.\n\n" Jul 1 00:22:18.692: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-1848-crds.spec.bars' Jul 1 00:22:18.977: INFO: stderr: "" Jul 1 00:22:18.977: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-1848-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: bars <[]Object>\n\nDESCRIPTION:\n List of Bars and their specs.\n\nFIELDS:\n age\t<string>\n Age of Bar.\n\n bazs\t<[]string>\n List of Bazs.\n\n name\t<string> -required-\n Name of Bar.\n\n" STEP: kubectl explain works to return error when explain is called on property that doesn't exist Jul 1 00:22:18.977: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-1848-crds.spec.bars2' Jul 1 00:22:19.253: INFO: rc: 1 [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 1 00:22:22.156: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-3672" for this suite.
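A note on the fixture this spec exercised: the suite registers a throwaway CRD whose published OpenAPI schema drives both the client-side validation steps (the rc: 1 create/apply attempts) and the kubectl explain output quoted above. The suite's own fixture lives under test/e2e/apimachinery; what follows is only a minimal sketch, with the group, kind, and field types inferred from the explain output above rather than taken from the suite's source, of how an equivalent CRD could be declared with the apiextensions v1 Go types.

package main

import (
	"encoding/json"
	"fmt"

	apiextv1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// Schema mirroring the explain output: spec.bars is a list of objects
	// with a required "name" plus optional "age" and "bazs".
	barSchema := apiextv1.JSONSchemaProps{
		Type:     "object",
		Required: []string{"name"},
		Properties: map[string]apiextv1.JSONSchemaProps{
			"name": {Type: "string", Description: "Name of Bar."},
			"age":  {Type: "string", Description: "Age of Bar."},
			"bazs": {Type: "array", Description: "List of Bazs.",
				Items: &apiextv1.JSONSchemaPropsOrArray{Schema: &apiextv1.JSONSchemaProps{Type: "string"}}},
		},
	}
	crd := &apiextv1.CustomResourceDefinition{
		ObjectMeta: metav1.ObjectMeta{Name: "foos.crd-publish-openapi-test-foo.example.com"},
		Spec: apiextv1.CustomResourceDefinitionSpec{
			Group: "crd-publish-openapi-test-foo.example.com",
			Names: apiextv1.CustomResourceDefinitionNames{Plural: "foos", Singular: "foo", Kind: "Foo", ListKind: "FooList"},
			Scope: apiextv1.NamespaceScoped,
			Versions: []apiextv1.CustomResourceDefinitionVersion{{
				Name: "v1", Served: true, Storage: true,
				Schema: &apiextv1.CustomResourceValidation{
					OpenAPIV3Schema: &apiextv1.JSONSchemaProps{
						Type:        "object",
						Description: "Foo CRD for Testing",
						Properties: map[string]apiextv1.JSONSchemaProps{
							"spec": {Type: "object", Description: "Specification of Foo",
								Properties: map[string]apiextv1.JSONSchemaProps{
									"bars": {Type: "array", Description: "List of Bars and their specs.",
										Items: &apiextv1.JSONSchemaPropsOrArray{Schema: &barSchema}},
								}},
							"status": {Type: "object", Description: "Status of Foo"},
						},
					},
				},
			}},
		},
	}
	// Print the manifest; actually applying it needs cluster-admin rights,
	// as the [Privileged:ClusterAdmin] tag on this spec suggests.
	out, _ := json.MarshalIndent(crd, "", "  ")
	fmt.Println(string(out))
}

Because the schema declares its properties explicitly, kubectl can both reject unknown or missing required fields client-side and answer the explain queries shown above.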
• [SLOW TEST:38.057 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD with validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD with validation schema [Conformance]","total":294,"completed":142,"skipped":2403,"failed":0} [k8s.io] Security Context when creating containers with AllowPrivilegeEscalation should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 1 00:22:22.161: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41 [It] should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Jul 1 00:22:22.241: INFO: Waiting up to 5m0s for pod "alpine-nnp-false-c87f6fe7-54d4-4af3-8f31-1832d1a70993" in namespace "security-context-test-7626" to be "Succeeded or Failed" Jul 1 00:22:22.245: INFO: Pod "alpine-nnp-false-c87f6fe7-54d4-4af3-8f31-1832d1a70993": Phase="Pending", Reason="", readiness=false. Elapsed: 3.281018ms Jul 1 00:22:24.249: INFO: Pod "alpine-nnp-false-c87f6fe7-54d4-4af3-8f31-1832d1a70993": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007370767s Jul 1 00:22:26.254: INFO: Pod "alpine-nnp-false-c87f6fe7-54d4-4af3-8f31-1832d1a70993": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012060822s Jul 1 00:22:26.254: INFO: Pod "alpine-nnp-false-c87f6fe7-54d4-4af3-8f31-1832d1a70993" satisfied condition "Succeeded or Failed" [AfterEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 1 00:22:26.260: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-7626" for this suite. •{"msg":"PASSED [k8s.io] Security Context when creating containers with AllowPrivilegeEscalation should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]","total":294,"completed":143,"skipped":2403,"failed":0} SSSSS ------------------------------ [sig-scheduling] LimitRange should create a LimitRange with defaults and ensure pod has those defaults applied. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-scheduling] LimitRange /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 1 00:22:26.268: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename limitrange STEP: Waiting for a default service account to be provisioned in namespace [It] should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a LimitRange STEP: Setting up watch STEP: Submitting a LimitRange Jul 1 00:22:26.420: INFO: observed the limitRanges list STEP: Verifying LimitRange creation was observed STEP: Fetching the LimitRange to ensure it has proper values Jul 1 00:22:26.443: INFO: Verifying requests: expected map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {} BinarySI} memory:{{209715200 0} {} BinarySI}] with actual map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {} BinarySI} memory:{{209715200 0} {} BinarySI}] Jul 1 00:22:26.443: INFO: Verifying limits: expected map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] with actual map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] STEP: Creating a Pod with no resource requirements STEP: Ensuring Pod has resource requirements applied from LimitRange Jul 1 00:22:26.466: INFO: Verifying requests: expected map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {} BinarySI} memory:{{209715200 0} {} BinarySI}] with actual map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {} BinarySI} memory:{{209715200 0} {} BinarySI}] Jul 1 00:22:26.467: INFO: Verifying limits: expected map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] with actual map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] STEP: Creating a Pod with partial resource requirements STEP: Ensuring Pod has merged resource requirements applied from LimitRange Jul 1 00:22:26.526: INFO: Verifying requests: expected map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{161061273600 0} {} 150Gi BinarySI} memory:{{157286400 0} {} 150Mi BinarySI}] with actual map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{161061273600 0} {} 150Gi BinarySI} memory:{{157286400 0} {} 150Mi BinarySI}] Jul 1 00:22:26.526: INFO: Verifying limits: expected map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] with actual map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] STEP: Failing to create a Pod with less than min resources STEP: Failing to create a Pod with more than max resources STEP: Updating a LimitRange STEP: Verifying LimitRange updating is effective STEP: Creating a Pod with less than former min resources STEP: Failing to create a Pod with more than max resources STEP: Deleting a LimitRange STEP: Verifying the LimitRange was deleted Jul 1 00:22:33.989: INFO: 
limitRange is already deleted STEP: Creating a Pod with more than former max resources [AfterEach] [sig-scheduling] LimitRange /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 1 00:22:34.001: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "limitrange-9356" for this suite. • [SLOW TEST:7.810 seconds] [sig-scheduling] LimitRange /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-scheduling] LimitRange should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance]","total":294,"completed":144,"skipped":2408,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Job should run a job to completion when tasks sometimes fail and are locally restarted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 1 00:22:34.079: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename job STEP: Waiting for a default service account to be provisioned in namespace [It] should run a job to completion when tasks sometimes fail and are locally restarted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a job STEP: Ensuring job reaches completions [AfterEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 1 00:22:52.207: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "job-6266" for this suite. 
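(Aside: in the LimitRange verification lines above, a quantity such as {{100 -3} {} 100m DecimalSI} is resource.Quantity's internal rendering: value 100 at scale 10^-3, i.e. 100m, with its cached string and DecimalSI format.)

The Job spec just exercised hinges on restartPolicy: OnFailure, so a container that sometimes exits non-zero is restarted in place by the kubelet rather than producing a replacement pod, and the Job still reaches its completion count. Below is a minimal sketch of such a Job, assuming client-go against the kubeconfig path used throughout this run; the names, counts, and the deliberately once-failing command are illustrative, not the suite's fixture.

package main

import (
	"context"
	"fmt"

	batchv1 "k8s.io/api/batch/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	parallelism, completions := int32(2), int32(4)
	job := &batchv1.Job{
		ObjectMeta: metav1.ObjectMeta{Name: "sometimes-fail"},
		Spec: batchv1.JobSpec{
			Parallelism: &parallelism,
			Completions: &completions,
			Template: corev1.PodTemplateSpec{
				Spec: corev1.PodSpec{
					// OnFailure makes the kubelet restart a failed
					// container locally, inside the same pod.
					RestartPolicy: corev1.RestartPolicyOnFailure,
					Volumes: []corev1.Volume{{
						Name:         "data",
						VolumeSource: corev1.VolumeSource{EmptyDir: &corev1.EmptyDirVolumeSource{}},
					}},
					Containers: []corev1.Container{{
						Name:  "c",
						Image: "busybox",
						// Fail on the first attempt, succeed after the local
						// restart: the emptyDir outlives container restarts
						// within the pod.
						Command:      []string{"sh", "-c", "if [ -f /data/ok ]; then exit 0; fi; touch /data/ok; exit 1"},
						VolumeMounts: []corev1.VolumeMount{{Name: "data", MountPath: "/data"}},
					}},
				},
			},
		},
	}
	created, err := cs.BatchV1().Jobs("default").Create(context.TODO(), job, metav1.CreateOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println("created job", created.Name)
}

With restartPolicy: Never the same failures would instead spawn new pods and count toward the Job's backoff limit, which is exactly the distinction the "locally restarted" wording in the spec name draws.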
• [SLOW TEST:18.136 seconds] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should run a job to completion when tasks sometimes fail and are locally restarted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] Job should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]","total":294,"completed":145,"skipped":2427,"failed":0} SSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 1 00:22:52.215: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name configmap-test-volume-map-6b80ae60-56b6-4e3a-aef8-7f3c13b2da11 STEP: Creating a pod to test consume configMaps Jul 1 00:22:52.308: INFO: Waiting up to 5m0s for pod "pod-configmaps-ef794203-b011-4610-bf4c-0aec6f263531" in namespace "configmap-1776" to be "Succeeded or Failed" Jul 1 00:22:52.311: INFO: Pod "pod-configmaps-ef794203-b011-4610-bf4c-0aec6f263531": Phase="Pending", Reason="", readiness=false. Elapsed: 3.743655ms Jul 1 00:22:54.396: INFO: Pod "pod-configmaps-ef794203-b011-4610-bf4c-0aec6f263531": Phase="Pending", Reason="", readiness=false. Elapsed: 2.088822387s Jul 1 00:22:56.412: INFO: Pod "pod-configmaps-ef794203-b011-4610-bf4c-0aec6f263531": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.104033025s STEP: Saw pod success Jul 1 00:22:56.412: INFO: Pod "pod-configmaps-ef794203-b011-4610-bf4c-0aec6f263531" satisfied condition "Succeeded or Failed" Jul 1 00:22:56.414: INFO: Trying to get logs from node latest-worker pod pod-configmaps-ef794203-b011-4610-bf4c-0aec6f263531 container configmap-volume-test: STEP: delete the pod Jul 1 00:22:56.478: INFO: Waiting for pod pod-configmaps-ef794203-b011-4610-bf4c-0aec6f263531 to disappear Jul 1 00:22:56.504: INFO: Pod pod-configmaps-ef794203-b011-4610-bf4c-0aec6f263531 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 1 00:22:56.504: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-1776" for this suite. 
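The ConfigMap spec being torn down here covers the configMap volume source with an items mapping and a per-item mode. As a minimal sketch using the corev1 Go types, with names loosely modeled on the log's configmap-test-volume-map fixture (the key, path, and 0400 mode below are illustrative assumptions, not the suite's values):

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	mode := int32(0400) // octal file mode applied to the projected item
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-configmaps-example"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "configmap-volume",
				VolumeSource: corev1.VolumeSource{
					ConfigMap: &corev1.ConfigMapVolumeSource{
						LocalObjectReference: corev1.LocalObjectReference{Name: "configmap-test-volume-map"},
						// Remap the key "data-1" to the file "path/to/data-2"
						// and give that one item an explicit 0400 mode.
						Items: []corev1.KeyToPath{{Key: "data-1", Path: "path/to/data-2", Mode: &mode}},
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:    "configmap-volume-test",
				Image:   "busybox",
				Command: []string{"sh", "-c", "cat /etc/configmap-volume/path/to/data-2 && ls -l /etc/configmap-volume/path/to/data-2"},
				VolumeMounts: []corev1.VolumeMount{{
					Name: "configmap-volume", MountPath: "/etc/configmap-volume",
				}},
			}},
		},
	}
	b, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(b))
}

With an items list only the named keys are projected, each under its remapped path, and the per-item mode is what a [LinuxOnly] "Item mode set" spec like this one asserts on.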
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":294,"completed":146,"skipped":2430,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 1 00:22:56.513: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for multiple CRDs of different groups [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: CRs in different groups (two CRDs) show up in OpenAPI documentation Jul 1 00:22:56.600: INFO: >>> kubeConfig: /root/.kube/config Jul 1 00:22:59.535: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 1 00:23:10.062: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-4843" for this suite. • [SLOW TEST:13.556 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for multiple CRDs of different groups [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance]","total":294,"completed":147,"skipped":2496,"failed":0} SSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 1 00:23:10.069: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating projection with secret that has name projected-secret-test-map-9e4e648e-cb11-479b-8658-8f75978f3f40 STEP: Creating a pod to test consume secrets Jul 1 00:23:10.194: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-ad07b00e-ddb5-462e-9a73-371146f21b80" in namespace "projected-9048" to be "Succeeded or Failed" Jul 1 00:23:10.222: INFO: Pod 
"pod-projected-secrets-ad07b00e-ddb5-462e-9a73-371146f21b80": Phase="Pending", Reason="", readiness=false. Elapsed: 28.457325ms Jul 1 00:23:12.391: INFO: Pod "pod-projected-secrets-ad07b00e-ddb5-462e-9a73-371146f21b80": Phase="Pending", Reason="", readiness=false. Elapsed: 2.197076229s Jul 1 00:23:14.395: INFO: Pod "pod-projected-secrets-ad07b00e-ddb5-462e-9a73-371146f21b80": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.201602145s STEP: Saw pod success Jul 1 00:23:14.395: INFO: Pod "pod-projected-secrets-ad07b00e-ddb5-462e-9a73-371146f21b80" satisfied condition "Succeeded or Failed" Jul 1 00:23:14.399: INFO: Trying to get logs from node latest-worker pod pod-projected-secrets-ad07b00e-ddb5-462e-9a73-371146f21b80 container projected-secret-volume-test: STEP: delete the pod Jul 1 00:23:14.470: INFO: Waiting for pod pod-projected-secrets-ad07b00e-ddb5-462e-9a73-371146f21b80 to disappear Jul 1 00:23:14.546: INFO: Pod pod-projected-secrets-ad07b00e-ddb5-462e-9a73-371146f21b80 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 1 00:23:14.546: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-9048" for this suite. •{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":294,"completed":148,"skipped":2506,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 1 00:23:14.557: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54 [It] should have monotonically increasing restart count [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating pod liveness-28ecb3b0-44a4-4ece-8e0a-a9f642a389a3 in namespace container-probe-496 Jul 1 00:23:18.698: INFO: Started pod liveness-28ecb3b0-44a4-4ece-8e0a-a9f642a389a3 in namespace container-probe-496 STEP: checking the pod's current state and verifying that restartCount is present Jul 1 00:23:18.702: INFO: Initial restart count of pod liveness-28ecb3b0-44a4-4ece-8e0a-a9f642a389a3 is 0 Jul 1 00:23:36.766: INFO: Restart count of pod container-probe-496/liveness-28ecb3b0-44a4-4ece-8e0a-a9f642a389a3 is now 1 (18.064009617s elapsed) Jul 1 00:23:56.849: INFO: Restart count of pod container-probe-496/liveness-28ecb3b0-44a4-4ece-8e0a-a9f642a389a3 is now 2 (38.14740661s elapsed) Jul 1 00:24:16.916: INFO: Restart count of pod container-probe-496/liveness-28ecb3b0-44a4-4ece-8e0a-a9f642a389a3 is now 3 (58.214085847s elapsed) Jul 1 00:24:36.964: INFO: Restart count of pod container-probe-496/liveness-28ecb3b0-44a4-4ece-8e0a-a9f642a389a3 is now 4 (1m18.262598221s elapsed) Jul 1 00:25:37.223: INFO: Restart count of pod 
container-probe-496/liveness-28ecb3b0-44a4-4ece-8e0a-a9f642a389a3 is now 5 (2m18.521281053s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 1 00:25:37.239: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-496" for this suite. • [SLOW TEST:142.692 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should have monotonically increasing restart count [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","total":294,"completed":149,"skipped":2547,"failed":0} SSSSSSSSSS ------------------------------ [sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 1 00:25:37.249: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:809 [It] should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating service in namespace services-9711 Jul 1 00:25:41.373: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-9711 kube-proxy-mode-detector -- /bin/sh -x -c curl -q -s --connect-timeout 1 http://localhost:10249/proxyMode' Jul 1 00:25:41.673: INFO: stderr: "I0701 00:25:41.515073 1355 log.go:172] (0xc00003a420) (0xc000310500) Create stream\nI0701 00:25:41.515140 1355 log.go:172] (0xc00003a420) (0xc000310500) Stream added, broadcasting: 1\nI0701 00:25:41.517669 1355 log.go:172] (0xc00003a420) Reply frame received for 1\nI0701 00:25:41.517701 1355 log.go:172] (0xc00003a420) (0xc0006fb040) Create stream\nI0701 00:25:41.517709 1355 log.go:172] (0xc00003a420) (0xc0006fb040) Stream added, broadcasting: 3\nI0701 00:25:41.518753 1355 log.go:172] (0xc00003a420) Reply frame received for 3\nI0701 00:25:41.518799 1355 log.go:172] (0xc00003a420) (0xc000310dc0) Create stream\nI0701 00:25:41.518814 1355 log.go:172] (0xc00003a420) (0xc000310dc0) Stream added, broadcasting: 5\nI0701 00:25:41.519783 1355 log.go:172] (0xc00003a420) Reply frame received for 5\nI0701 00:25:41.635250 1355 log.go:172] (0xc00003a420) Data frame received for 5\nI0701 00:25:41.635276 1355 log.go:172] (0xc000310dc0) (5) Data frame handling\nI0701 00:25:41.635298 1355 log.go:172] (0xc000310dc0) (5) Data frame sent\n+ curl -q -s --connect-timeout 1 http://localhost:10249/proxyMode\nI0701 00:25:41.662661 1355 log.go:172] (0xc00003a420) Data frame received for 3\nI0701 00:25:41.662686 1355 log.go:172] (0xc0006fb040) (3) Data frame handling\nI0701 00:25:41.662699 1355 
log.go:172] (0xc0006fb040) (3) Data frame sent\nI0701 00:25:41.663238 1355 log.go:172] (0xc00003a420) Data frame received for 3\nI0701 00:25:41.663389 1355 log.go:172] (0xc0006fb040) (3) Data frame handling\nI0701 00:25:41.663756 1355 log.go:172] (0xc00003a420) Data frame received for 5\nI0701 00:25:41.663788 1355 log.go:172] (0xc000310dc0) (5) Data frame handling\nI0701 00:25:41.665451 1355 log.go:172] (0xc00003a420) Data frame received for 1\nI0701 00:25:41.665482 1355 log.go:172] (0xc000310500) (1) Data frame handling\nI0701 00:25:41.665501 1355 log.go:172] (0xc000310500) (1) Data frame sent\nI0701 00:25:41.665543 1355 log.go:172] (0xc00003a420) (0xc000310500) Stream removed, broadcasting: 1\nI0701 00:25:41.665791 1355 log.go:172] (0xc00003a420) Go away received\nI0701 00:25:41.666091 1355 log.go:172] (0xc00003a420) (0xc000310500) Stream removed, broadcasting: 1\nI0701 00:25:41.666112 1355 log.go:172] (0xc00003a420) (0xc0006fb040) Stream removed, broadcasting: 3\nI0701 00:25:41.666123 1355 log.go:172] (0xc00003a420) (0xc000310dc0) Stream removed, broadcasting: 5\n" Jul 1 00:25:41.673: INFO: stdout: "iptables" Jul 1 00:25:41.673: INFO: proxyMode: iptables Jul 1 00:25:41.678: INFO: Waiting for pod kube-proxy-mode-detector to disappear Jul 1 00:25:41.681: INFO: Pod kube-proxy-mode-detector still exists Jul 1 00:25:43.681: INFO: Waiting for pod kube-proxy-mode-detector to disappear Jul 1 00:25:43.686: INFO: Pod kube-proxy-mode-detector still exists Jul 1 00:25:45.681: INFO: Waiting for pod kube-proxy-mode-detector to disappear Jul 1 00:25:45.686: INFO: Pod kube-proxy-mode-detector still exists Jul 1 00:25:47.681: INFO: Waiting for pod kube-proxy-mode-detector to disappear Jul 1 00:25:47.685: INFO: Pod kube-proxy-mode-detector still exists Jul 1 00:25:49.681: INFO: Waiting for pod kube-proxy-mode-detector to disappear Jul 1 00:25:49.686: INFO: Pod kube-proxy-mode-detector still exists Jul 1 00:25:51.681: INFO: Waiting for pod kube-proxy-mode-detector to disappear Jul 1 00:25:51.685: INFO: Pod kube-proxy-mode-detector still exists Jul 1 00:25:53.681: INFO: Waiting for pod kube-proxy-mode-detector to disappear Jul 1 00:25:53.685: INFO: Pod kube-proxy-mode-detector still exists Jul 1 00:25:55.681: INFO: Waiting for pod kube-proxy-mode-detector to disappear Jul 1 00:25:55.685: INFO: Pod kube-proxy-mode-detector no longer exists STEP: creating service affinity-clusterip-timeout in namespace services-9711 STEP: creating replication controller affinity-clusterip-timeout in namespace services-9711 I0701 00:25:55.763600 8 runners.go:190] Created replication controller with name: affinity-clusterip-timeout, namespace: services-9711, replica count: 3 I0701 00:25:58.814050 8 runners.go:190] affinity-clusterip-timeout Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0701 00:26:01.814301 8 runners.go:190] affinity-clusterip-timeout Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Jul 1 00:26:01.877: INFO: Creating new exec pod Jul 1 00:26:06.920: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-9711 execpod-affinity9s842 -- /bin/sh -x -c nc -zv -t -w 2 affinity-clusterip-timeout 80' Jul 1 00:26:07.210: INFO: stderr: "I0701 00:26:07.072870 1375 log.go:172] (0xc000bb3290) (0xc000b7a320) Create stream\nI0701 00:26:07.072955 1375 log.go:172] (0xc000bb3290) (0xc000b7a320) Stream added, 
STEP: creating service affinity-clusterip-timeout in namespace services-9711 STEP: creating replication controller affinity-clusterip-timeout in namespace services-9711 I0701 00:25:55.763600 8 runners.go:190] Created replication controller with name: affinity-clusterip-timeout, namespace: services-9711, replica count: 3 I0701 00:25:58.814050 8 runners.go:190] affinity-clusterip-timeout Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0701 00:26:01.814301 8 runners.go:190] affinity-clusterip-timeout Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Jul 1 00:26:01.877: INFO: Creating new exec pod Jul 1 00:26:06.920: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-9711 execpod-affinity9s842 -- /bin/sh -x -c nc -zv -t -w 2 affinity-clusterip-timeout 80' Jul 1 00:26:07.210: INFO: stderr [SPDY stream framing omitted]: "+ nc -zv -t -w 2 affinity-clusterip-timeout 80" "Connection to affinity-clusterip-timeout 80 port [tcp/http] succeeded!" Jul 1 00:26:07.210: INFO: stdout: "" Jul 1 00:26:07.211: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-9711 execpod-affinity9s842 -- /bin/sh -x -c nc -zv -t -w 2 10.108.211.51 80' Jul 1 00:26:07.563: INFO: stderr [SPDY stream framing omitted]: "+ nc -zv -t -w 2 10.108.211.51 80" "Connection to 10.108.211.51 80 port [tcp/http] succeeded!" Jul 1 00:26:07.563: INFO: stdout: "" Jul 1 00:26:07.563: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-9711 execpod-affinity9s842 -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://10.108.211.51:80/ ; done' Jul 1 00:26:07.955: INFO: stderr [SPDY stream framing omitted; 16 "+ echo" / "+ curl -q -s --connect-timeout 2 http://10.108.211.51:80/" iterations echoed] Jul 1 00:26:07.956: INFO: stdout: "affinity-clusterip-timeout-j52rr", newline-separated, 16 times Jul 1 00:26:07.956: INFO: Received response from host: Jul 1 00:26:07.956: INFO: Received response from host: affinity-clusterip-timeout-j52rr [the same response logged 16 times] Jul 1 00:26:07.956: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-9711 execpod-affinity9s842 -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://10.108.211.51:80/' Jul 1 00:26:08.203: INFO: stderr [SPDY stream framing omitted]: "+ curl -q -s --connect-timeout 2 http://10.108.211.51:80/" Jul 1 00:26:08.204: INFO: stdout: "affinity-clusterip-timeout-j52rr" Jul 1 00:26:23.204: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-9711 execpod-affinity9s842 -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://10.108.211.51:80/' Jul 1 00:26:23.469: INFO: stderr [SPDY stream framing omitted]: "+ curl -q -s --connect-timeout 2 http://10.108.211.51:80/" Jul 1 00:26:23.469: INFO: stdout: "affinity-clusterip-timeout-2ldzg" Jul 1 00:26:23.469: INFO: Cleaning up the exec pod STEP: deleting ReplicationController affinity-clusterip-timeout in namespace services-9711, will wait for the garbage collector to delete the pods Jul 1 00:26:23.684: INFO: Deleting ReplicationController affinity-clusterip-timeout took: 13.36649ms Jul 1 00:26:24.284: INFO: Terminating ReplicationController affinity-clusterip-timeout pods took: 600.280428ms [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 1 00:26:35.329: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-9711" for this suite.
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:813 • [SLOW TEST:58.091 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","total":294,"completed":150,"skipped":2557,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------
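[Editor's note] The spec above verifies ClientIP session affinity with a timeout: all sixteen back-to-back requests hit affinity-clusterip-timeout-j52rr, but after the client sat idle past the configured timeout (the single requests at 00:26:08 and 00:26:23 above), the next request landed on a different backend, affinity-clusterip-timeout-2ldzg. A minimal Service manifest sketch showing the relevant fields; the selector, ports, and the 10-second timeout are illustrative, and the suite's actual timeout value is not shown in this excerpt:

  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Service
  metadata:
    name: affinity-clusterip-timeout
  spec:
    selector:
      app: affinity-backend
    ports:
    - port: 80
      targetPort: 8080
    sessionAffinity: ClientIP        # pin each client IP to one backend
    sessionAffinityConfig:
      clientIP:
        timeoutSeconds: 10           # affinity expires after 10s of inactivity
  EOF

------------------------------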
[sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 1 00:26:35.341: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Performing setup for networking test in namespace pod-network-test-5390 STEP: creating a selector STEP: Creating the service pods in kubernetes Jul 1 00:26:35.416: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable Jul 1 00:26:35.503: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) [same Pending status polled at 00:26:37 and 00:26:39] Jul 1 00:26:41.508: INFO: The status of Pod netserver-0 is Running (Ready = false) [same not-Ready status polled every 2 s through 00:26:53] Jul 1 00:26:55.508: INFO: The status of Pod netserver-0 is Running (Ready = true) Jul 1 00:26:55.514: INFO: The status of Pod netserver-1 is Running (Ready = true) STEP: Creating test pods Jul 1 00:26:59.619: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.1.152:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-5390 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jul 1 00:26:59.619: INFO: >>> kubeConfig: /root/.kube/config [SPDY stream framing from the exec omitted] Jul 1 00:26:59.750: INFO: Found all expected endpoints: [netserver-0] Jul 1 00:26:59.753: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.2.195:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-5390 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jul 1 00:26:59.753: INFO: >>> kubeConfig: /root/.kube/config [SPDY stream framing from the exec omitted] Jul 1 00:26:59.864: INFO: Found all expected endpoints: [netserver-1] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 1 00:26:59.864: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-5390" for this suite. • [SLOW TEST:24.530 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29 should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","total":294,"completed":151,"skipped":2608,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------
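[Editor's note] The spec above checks that a host-network pod on one node can reach server pods by pod IP across nodes, i.e. that the cluster's pod network routes node-to-pod traffic. The netserver pods serve an HTTP /hostName endpoint on port 8080 (agnhost's netexec server in this suite) that returns the serving pod's name. A hand-run sketch under the same assumptions; the image tag is illustrative:

  # serve /hostName on :8080 in a pod
  kubectl run netserver --image=k8s.gcr.io/e2e-test-images/agnhost:2.21 -- \
    netexec --http-port=8080
  # query it by pod IP, e.g. from a hostNetwork pod or directly from a node
  POD_IP=$(kubectl get pod netserver -o jsonpath='{.status.podIP}')
  curl -g -q -s --max-time 15 --connect-timeout 1 "http://$POD_IP:8080/hostName"
  # prints: netserver

------------------------------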
[sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 1 00:26:59.872: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name projected-configmap-test-volume-map-ac34ab5a-34d5-401b-a10c-4e840edd4435 STEP: Creating a pod to test consume configMaps Jul 1 00:26:59.985: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-41e2e5e5-bc1d-411f-933f-6cde4780f0e1" in namespace "projected-5252" to be "Succeeded or Failed" Jul 1 00:27:00.000: INFO: Pod "pod-projected-configmaps-41e2e5e5-bc1d-411f-933f-6cde4780f0e1": Phase="Pending", Reason="", readiness=false. Elapsed: 14.554635ms Jul 1 00:27:02.004: INFO: Pod "pod-projected-configmaps-41e2e5e5-bc1d-411f-933f-6cde4780f0e1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018272106s Jul 1 00:27:04.008: INFO: Pod "pod-projected-configmaps-41e2e5e5-bc1d-411f-933f-6cde4780f0e1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.023060643s STEP: Saw pod success Jul 1 00:27:04.008: INFO: Pod "pod-projected-configmaps-41e2e5e5-bc1d-411f-933f-6cde4780f0e1" satisfied condition "Succeeded or Failed" Jul 1 00:27:04.012: INFO: Trying to get logs from node latest-worker pod pod-projected-configmaps-41e2e5e5-bc1d-411f-933f-6cde4780f0e1 container projected-configmap-volume-test: STEP: delete the pod Jul 1 00:27:04.079: INFO: Waiting for pod pod-projected-configmaps-41e2e5e5-bc1d-411f-933f-6cde4780f0e1 to disappear Jul 1 00:27:04.088: INFO: Pod pod-projected-configmaps-41e2e5e5-bc1d-411f-933f-6cde4780f0e1 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 1 00:27:04.088: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5252" for this suite. •{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]","total":294,"completed":152,"skipped":2625,"failed":0} SS ------------------------------
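[Editor's note] "With mappings as non-root" means the configMap key is remapped to a custom relative path inside a projected volume, and the consuming container runs under a non-root UID while still being able to read the file. A manifest sketch with hypothetical names (the suite's actual pod uses its own test image and paths):

  kubectl create configmap demo-config --from-literal=data-1=value-1
  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: projected-cm-demo
  spec:
    securityContext:
      runAsUser: 1000                  # non-root
    restartPolicy: Never
    containers:
    - name: test
      image: busybox
      command: ["sh", "-c", "cat /etc/projected/path/to/data-1"]
      volumeMounts:
      - name: cm
        mountPath: /etc/projected
    volumes:
    - name: cm
      projected:
        sources:
        - configMap:
            name: demo-config
            items:
            - key: data-1
              path: path/to/data-1     # the "mapping": key -> custom relative path
  EOF
  kubectl logs projected-cm-demo       # prints: value-1

------------------------------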
[Conformance]","total":294,"completed":153,"skipped":2627,"failed":0} SSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 1 00:27:04.341: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Jul 1 00:27:04.782: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Jul 1 00:27:07.017: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729160024, loc:(*time.Location)(0x80643c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729160024, loc:(*time.Location)(0x80643c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729160024, loc:(*time.Location)(0x80643c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729160024, loc:(*time.Location)(0x80643c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)} Jul 1 00:27:09.021: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729160024, loc:(*time.Location)(0x80643c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729160024, loc:(*time.Location)(0x80643c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729160024, loc:(*time.Location)(0x80643c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729160024, loc:(*time.Location)(0x80643c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Jul 1 00:27:12.052: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate custom resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Jul 1 00:27:12.066: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the mutating 
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 1 00:27:04.341: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Jul 1 00:27:04.782: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Jul 1 00:27:07.017: INFO: deployment status: 1 replica, 0 ready, 0 available, 1 unavailable; Available=False (MinimumReplicasUnavailable: "Deployment does not have minimum availability."), Progressing=True (ReplicaSetUpdated: ReplicaSet "sample-webhook-deployment-75dd644756" is progressing.) Jul 1 00:27:09.021: INFO: deployment status: unchanged from the previous poll STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Jul 1 00:27:12.052: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate custom resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Jul 1 00:27:12.066: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the mutating webhook for custom resource e2e-test-webhook-1136-crds.webhook.example.com via the AdmissionRegistration API STEP: Creating a custom resource that should be mutated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 1 00:27:13.259: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-4111" for this suite. STEP: Destroying namespace "webhook-4111-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:9.048 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate custom resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]","total":294,"completed":154,"skipped":2637,"failed":0} SSSSSSSSSSSSSSSS ------------------------------
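[Editor's note] The registration step above amounts to creating a MutatingWebhookConfiguration whose rules select the custom resource; the API server then sends AdmissionReview requests for matching CREATE operations to the webhook service, and the returned patch mutates the object before it is stored. A schema sketch; the group, resource plural, service coordinates, and CA bundle are all placeholders, not the suite's actual values:

  kubectl apply -f - <<'EOF'
  apiVersion: admissionregistration.k8s.io/v1
  kind: MutatingWebhookConfiguration
  metadata:
    name: demo-mutating-webhook
  webhooks:
  - name: mutate-custom-resource.example.com
    rules:
    - apiGroups: ["webhook.example.com"]
      apiVersions: ["v1"]
      operations: ["CREATE"]
      resources: ["demo-crds"]
    clientConfig:
      service:
        namespace: demo
        name: e2e-test-webhook
        path: /mutating-custom-resource
      caBundle: "PLACEHOLDER"          # base64-encoded CA cert of the webhook server
    sideEffects: None
    admissionReviewVersions: ["v1"]
    failurePolicy: Fail
  EOF

------------------------------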
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 1 00:27:13.389: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Jul 1 00:27:14.816: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Jul 1 00:27:16.841: INFO: deployment status: 1 replica, 0 ready, 0 available, 1 unavailable; Available=False (MinimumReplicasUnavailable: "Deployment does not have minimum availability."), Progressing=True (ReplicaSetUpdated: ReplicaSet "sample-webhook-deployment-75dd644756" is progressing.) Jul 1 00:27:18.844: INFO: deployment status: unchanged from the previous poll STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Jul 1 00:27:21.899: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should be able to deny pod and configmap creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Registering the webhook via the AdmissionRegistration API Jul 1 00:27:22.006: INFO: Waiting for webhook configuration to be ready... STEP: create a pod that should be denied by the webhook STEP: create a pod that causes the webhook to hang STEP: create a configmap that should be denied by the webhook STEP: create a configmap that should be admitted by the webhook STEP: update (PUT) the admitted configmap to a non-compliant one should be rejected by the webhook STEP: update (PATCH) the admitted configmap to a non-compliant one should be rejected by the webhook STEP: create a namespace that bypasses the webhook STEP: create a configmap that violates the webhook policy but is in a whitelisted namespace [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 1 00:27:32.355: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-5048" for this suite. STEP: Destroying namespace "webhook-5048-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:19.086 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to deny pod and configmap creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","total":294,"completed":155,"skipped":2653,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------
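[Editor's note] The deny spec follows the same pattern with a ValidatingWebhookConfiguration: the webhook rejects pods and configmaps that violate its policy on CREATE and UPDATE (both PUT and PATCH arrive as UPDATE admission operations), while a namespaceSelector lets labeled namespaces bypass it entirely, which is the "whitelisted namespace" step above. A schema sketch with placeholder names and paths, not the suite's actual configuration:

  kubectl apply -f - <<'EOF'
  apiVersion: admissionregistration.k8s.io/v1
  kind: ValidatingWebhookConfiguration
  metadata:
    name: demo-deny-webhook
  webhooks:
  - name: deny-unwanted-objects.example.com
    rules:
    - apiGroups: [""]
      apiVersions: ["v1"]
      operations: ["CREATE", "UPDATE"]
      resources: ["pods", "configmaps"]
    namespaceSelector:                 # namespaces carrying this label skip the webhook
      matchExpressions:
      - key: webhook-e2e-skip
        operator: DoesNotExist
    clientConfig:
      service:
        namespace: demo
        name: e2e-test-webhook
        path: /always-deny
      caBundle: "PLACEHOLDER"          # base64-encoded CA cert of the webhook server
    sideEffects: None
    admissionReviewVersions: ["v1"]
    timeoutSeconds: 5                  # bounds the "webhook that hangs" case
  EOF

------------------------------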
[sig-network] DNS should provide DNS for ExternalName services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 1 00:27:32.475: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for ExternalName services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a test externalName service STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-7540.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-7540.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-7540.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-7540.svc.cluster.local; sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Jul 1 00:27:40.666: INFO: DNS probes using dns-test-b056b59d-4ca3-421c-bb35-8e9cf764c5e3 succeeded STEP: deleting the pod STEP: changing the externalName to bar.example.com STEP: Running these commands on wheezy: [same dig CNAME loop as above] STEP: Running these commands on jessie: [same dig CNAME loop as above] STEP: creating a second pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Jul 1 00:27:48.831: INFO: File wheezy_udp@dns-test-service-3.dns-7540.svc.cluster.local from pod dns-7540/dns-test-9215bd87-414d-4b31-b003-bfd3ed80cdc5 contains 'foo.example.com.' instead of 'bar.example.com.' Jul 1 00:27:48.834: INFO: File jessie_udp@dns-test-service-3.dns-7540.svc.cluster.local from pod dns-7540/dns-test-9215bd87-414d-4b31-b003-bfd3ed80cdc5 contains '' instead of 'bar.example.com.' Jul 1 00:27:48.834: INFO: Lookups using dns-7540/dns-test-9215bd87-414d-4b31-b003-bfd3ed80cdc5 failed for: [wheezy_udp@dns-test-service-3.dns-7540.svc.cluster.local jessie_udp@dns-test-service-3.dns-7540.svc.cluster.local] [the same stale-lookup round, with both probers still returning 'foo.example.com.', repeated at 00:27:53, 00:27:58, 00:28:03 and 00:28:08] Jul 1 00:28:13.844: INFO: DNS probes using dns-test-9215bd87-414d-4b31-b003-bfd3ed80cdc5 succeeded STEP: deleting the pod STEP: changing the service to type=ClusterIP STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-7540.svc.cluster.local A > /results/wheezy_udp@dns-test-service-3.dns-7540.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-7540.svc.cluster.local A > /results/jessie_udp@dns-test-service-3.dns-7540.svc.cluster.local; sleep 1; done STEP: creating a third pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Jul 1 00:28:22.763: INFO: DNS probes using dns-test-c7ba5df7-482d-4ee7-bb01-6d65cbdff181 succeeded STEP: deleting the pod STEP: deleting the test externalName service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 1 00:28:22.874: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-7540" for this suite. • [SLOW TEST:50.406 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for ExternalName services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for ExternalName services [Conformance]","total":294,"completed":156,"skipped":2681,"failed":0} S ------------------------------
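[Editor's note] An ExternalName service publishes a CNAME record rather than a cluster IP, which is what the dig probes above assert. When spec.externalName changes, probers can keep seeing the old target for a while (here, roughly 25 seconds of stale 'foo.example.com.' answers before success at 00:28:13), typically due to caching in the cluster DNS resolver. A sketch, with the namespace and target hostnames illustrative:

  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Service
  metadata:
    name: dns-test-service-3
    namespace: demo
  spec:
    type: ExternalName
    externalName: foo.example.com
  EOF
  # from any pod with dig installed:
  #   dig +short dns-test-service-3.demo.svc.cluster.local CNAME
  #   -> foo.example.com.
  kubectl patch service dns-test-service-3 -n demo \
    -p '{"spec":{"externalName":"bar.example.com"}}'
  # repeat the dig; expect bar.example.com. once the cached record ages out

------------------------------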
does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729160104, loc:(*time.Location)(0x80643c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729160104, loc:(*time.Location)(0x80643c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Jul 1 00:28:29.267: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should be able to deny custom resource creation, update and deletion [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Jul 1 00:28:29.271: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the custom resource webhook via the AdmissionRegistration API STEP: Creating a custom resource that should be denied by the webhook STEP: Creating a custom resource whose deletion would be denied by the webhook STEP: Updating the custom resource with disallowed data should be denied STEP: Deleting the custom resource should be denied STEP: Remove the offending key and value from the custom resource data STEP: Deleting the updated custom resource should be successful [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 1 00:28:30.527: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-4741" for this suite. STEP: Destroying namespace "webhook-4741-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:7.758 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to deny custom resource creation, update and deletion [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","total":294,"completed":157,"skipped":2682,"failed":0} [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 1 00:28:30.640: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the container STEP: wait for the container to reach Failed STEP: get 
the container status STEP: the container should be terminated STEP: the termination message should be set Jul 1 00:28:34.891: INFO: Expected: &{DONE} to match Container's Termination Message: DONE -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 1 00:28:34.956: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-2882" for this suite. •{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":294,"completed":158,"skipped":2682,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 1 00:28:34.962: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54 [It] should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating pod test-webserver-61621d77-837c-4ce9-8e33-4007beb93b42 in namespace container-probe-1921 Jul 1 00:28:39.053: INFO: Started pod test-webserver-61621d77-837c-4ce9-8e33-4007beb93b42 in namespace container-probe-1921 STEP: checking the pod's current state and verifying that restartCount is present Jul 1 00:28:39.056: INFO: Initial restart count of pod test-webserver-61621d77-837c-4ce9-8e33-4007beb93b42 is 0 STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 1 00:32:39.718: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-1921" for this suite. 
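The probe test just above is a negative check: the pod runs for four minutes (00:28:39 to 00:32:39) and passes only if restartCount stays at 0, i.e. the /healthz HTTP liveness probe never fails. A sketch of the pod shape such a test creates, built with the k8s.io/api types (the image is taken from the node image list later in this log; the port and probe timings are illustrative, not the test's actual values):

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "test-webserver"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name: "test-webserver",
				// Image from this log's node image list; any HTTP server
				// answering 200 on /healthz would serve the same purpose.
				Image: "us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13",
				LivenessProbe: &corev1.Probe{
					// This field is named Handler in k8s.io/api before v0.22.
					ProbeHandler: corev1.ProbeHandler{
						HTTPGet: &corev1.HTTPGetAction{Path: "/healthz", Port: intstr.FromInt(80)},
					},
					InitialDelaySeconds: 15, // illustrative timings
					PeriodSeconds:       10,
					FailureThreshold:    3,
				},
			}},
		},
	}
	fmt.Printf("liveness probe: %+v\n", pod.Spec.Containers[0].LivenessProbe.HTTPGet)
}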
• [SLOW TEST:244.783 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":294,"completed":159,"skipped":2731,"failed":0} SSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 1 00:32:39.745: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name projected-configmap-test-volume-7ddcea08-ed5e-4a6c-9422-91d0cf23db4b STEP: Creating a pod to test consume configMaps Jul 1 00:32:40.161: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-1fb9f893-6cdb-4e53-92b2-c79a7bb7185d" in namespace "projected-9874" to be "Succeeded or Failed" Jul 1 00:32:40.207: INFO: Pod "pod-projected-configmaps-1fb9f893-6cdb-4e53-92b2-c79a7bb7185d": Phase="Pending", Reason="", readiness=false. Elapsed: 46.382328ms Jul 1 00:32:42.211: INFO: Pod "pod-projected-configmaps-1fb9f893-6cdb-4e53-92b2-c79a7bb7185d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.050182667s Jul 1 00:32:44.216: INFO: Pod "pod-projected-configmaps-1fb9f893-6cdb-4e53-92b2-c79a7bb7185d": Phase="Running", Reason="", readiness=true. Elapsed: 4.054760594s Jul 1 00:32:46.220: INFO: Pod "pod-projected-configmaps-1fb9f893-6cdb-4e53-92b2-c79a7bb7185d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.059310007s STEP: Saw pod success Jul 1 00:32:46.220: INFO: Pod "pod-projected-configmaps-1fb9f893-6cdb-4e53-92b2-c79a7bb7185d" satisfied condition "Succeeded or Failed" Jul 1 00:32:46.224: INFO: Trying to get logs from node latest-worker pod pod-projected-configmaps-1fb9f893-6cdb-4e53-92b2-c79a7bb7185d container projected-configmap-volume-test: STEP: delete the pod Jul 1 00:32:46.275: INFO: Waiting for pod pod-projected-configmaps-1fb9f893-6cdb-4e53-92b2-c79a7bb7185d to disappear Jul 1 00:32:46.290: INFO: Pod pod-projected-configmaps-1fb9f893-6cdb-4e53-92b2-c79a7bb7185d no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 1 00:32:46.290: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-9874" for this suite. 
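The projected-ConfigMap test just above mounts one ConfigMap through two projected volumes in a single pod and verifies the contents at both mount paths. A sketch of that pod spec using client-go's corev1 types (the volume and ConfigMap names echo the log; the image and mount paths are illustrative):

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

// projectedConfigMapVolume returns a projected volume backed by one ConfigMap.
func projectedConfigMapVolume(volName, cmName string) corev1.Volume {
	return corev1.Volume{
		Name: volName,
		VolumeSource: corev1.VolumeSource{
			Projected: &corev1.ProjectedVolumeSource{
				Sources: []corev1.VolumeProjection{{
					ConfigMap: &corev1.ConfigMapProjection{
						LocalObjectReference: corev1.LocalObjectReference{Name: cmName},
					},
				}},
			},
		},
	}
}

func main() {
	// Two volumes backed by the same ConfigMap, mounted at two paths --
	// the shape the "multiple volumes in the same pod" test exercises.
	spec := corev1.PodSpec{
		Volumes: []corev1.Volume{
			projectedConfigMapVolume("projected-configmap-volume", "projected-configmap-test-volume"),
			projectedConfigMapVolume("projected-configmap-volume-2", "projected-configmap-test-volume"),
		},
		Containers: []corev1.Container{{
			Name:  "projected-configmap-volume-test",
			Image: "us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13", // from this log's image list
			VolumeMounts: []corev1.VolumeMount{
				{Name: "projected-configmap-volume", MountPath: "/etc/projected-configmap-volume"},
				{Name: "projected-configmap-volume-2", MountPath: "/etc/projected-configmap-volume-2"},
			},
		}},
	}
	fmt.Printf("%d volumes, %d mounts\n", len(spec.Volumes), len(spec.Containers[0].VolumeMounts))
}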
• [SLOW TEST:6.552 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:36 should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":294,"completed":160,"skipped":2740,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition creating/deleting custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 1 00:32:46.298: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] creating/deleting custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Jul 1 00:32:46.384: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 1 00:32:47.403: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-2639" for this suite. 
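The CustomResourceDefinition test just above only round-trips a definition: create it, then delete it, and finishes in about a second. A minimal sketch with the apiextensions clientset, assuming a hypothetical example.com/v1 Foo resource (the kubeconfig path is the one this suite itself uses):

package main

import (
	"context"

	apiextv1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"
	clientset "k8s.io/apiextensions-apiserver/pkg/client/clientset/clientset"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := clientset.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// Hypothetical group/kind; apiextensions/v1 requires a structural schema,
	// so even a throwaway CRD needs at least {type: object}.
	crd := &apiextv1.CustomResourceDefinition{
		ObjectMeta: metav1.ObjectMeta{Name: "foos.example.com"},
		Spec: apiextv1.CustomResourceDefinitionSpec{
			Group: "example.com",
			Scope: apiextv1.NamespaceScoped,
			Names: apiextv1.CustomResourceDefinitionNames{
				Plural: "foos", Singular: "foo", Kind: "Foo", ListKind: "FooList",
			},
			Versions: []apiextv1.CustomResourceDefinitionVersion{{
				Name: "v1", Served: true, Storage: true,
				Schema: &apiextv1.CustomResourceValidation{
					OpenAPIV3Schema: &apiextv1.JSONSchemaProps{Type: "object"},
				},
			}},
		},
	}

	ctx := context.Background()
	if _, err := cs.ApiextensionsV1().CustomResourceDefinitions().Create(ctx, crd, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
	if err := cs.ApiextensionsV1().CustomResourceDefinitions().Delete(ctx, crd.Name, metav1.DeleteOptions{}); err != nil {
		panic(err)
	}
}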
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition creating/deleting custom resource definition objects works [Conformance]","total":294,"completed":161,"skipped":2791,"failed":0} S ------------------------------ [sig-cli] Kubectl client Kubectl server-side dry-run should check if kubectl can dry-run update Pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 1 00:32:47.412: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:251 [It] should check if kubectl can dry-run update Pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: running the image docker.io/library/httpd:2.4.38-alpine Jul 1 00:32:47.476: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config run e2e-test-httpd-pod --image=docker.io/library/httpd:2.4.38-alpine --labels=run=e2e-test-httpd-pod --namespace=kubectl-3321' Jul 1 00:32:50.613: INFO: stderr: "" Jul 1 00:32:50.613: INFO: stdout: "pod/e2e-test-httpd-pod created\n" STEP: replace the image in the pod with server-side dry-run Jul 1 00:32:50.613: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pod e2e-test-httpd-pod -o json --namespace=kubectl-3321' Jul 1 00:32:50.739: INFO: stderr: "" Jul 1 00:32:50.739: INFO: stdout: "{\n \"apiVersion\": \"v1\",\n \"kind\": \"Pod\",\n \"metadata\": {\n \"creationTimestamp\": \"2020-07-01T00:32:50Z\",\n \"labels\": {\n \"run\": \"e2e-test-httpd-pod\"\n },\n \"managedFields\": [\n {\n \"apiVersion\": \"v1\",\n \"fieldsType\": \"FieldsV1\",\n \"fieldsV1\": {\n \"f:metadata\": {\n \"f:labels\": {\n \".\": {},\n \"f:run\": {}\n }\n },\n \"f:spec\": {\n \"f:containers\": {\n \"k:{\\\"name\\\":\\\"e2e-test-httpd-pod\\\"}\": {\n \".\": {},\n \"f:image\": {},\n \"f:imagePullPolicy\": {},\n \"f:name\": {},\n \"f:resources\": {},\n \"f:terminationMessagePath\": {},\n \"f:terminationMessagePolicy\": {}\n }\n },\n \"f:dnsPolicy\": {},\n \"f:enableServiceLinks\": {},\n \"f:restartPolicy\": {},\n \"f:schedulerName\": {},\n \"f:securityContext\": {},\n \"f:terminationGracePeriodSeconds\": {}\n }\n },\n \"manager\": \"kubectl-run\",\n \"operation\": \"Update\",\n \"time\": \"2020-07-01T00:32:50Z\"\n },\n {\n \"apiVersion\": \"v1\",\n \"fieldsType\": \"FieldsV1\",\n \"fieldsV1\": {\n \"f:status\": {\n \"f:conditions\": {\n \"k:{\\\"type\\\":\\\"ContainersReady\\\"}\": {\n \".\": {},\n \"f:lastProbeTime\": {},\n \"f:lastTransitionTime\": {},\n \"f:message\": {},\n \"f:reason\": {},\n \"f:status\": {},\n \"f:type\": {}\n },\n \"k:{\\\"type\\\":\\\"Initialized\\\"}\": {\n \".\": {},\n \"f:lastProbeTime\": {},\n \"f:lastTransitionTime\": {},\n \"f:status\": {},\n \"f:type\": {}\n },\n \"k:{\\\"type\\\":\\\"Ready\\\"}\": {\n \".\": {},\n \"f:lastProbeTime\": {},\n \"f:lastTransitionTime\": {},\n \"f:message\": {},\n \"f:reason\": {},\n \"f:status\": {},\n \"f:type\": {}\n }\n },\n \"f:containerStatuses\": {},\n \"f:hostIP\": 
{},\n \"f:startTime\": {}\n }\n },\n \"manager\": \"kubelet\",\n \"operation\": \"Update\",\n \"time\": \"2020-07-01T00:32:50Z\"\n }\n ],\n \"name\": \"e2e-test-httpd-pod\",\n \"namespace\": \"kubectl-3321\",\n \"resourceVersion\": \"17248518\",\n \"selfLink\": \"/api/v1/namespaces/kubectl-3321/pods/e2e-test-httpd-pod\",\n \"uid\": \"733aabb0-e7a5-4337-ab23-0fc13334b35b\"\n },\n \"spec\": {\n \"containers\": [\n {\n \"image\": \"docker.io/library/httpd:2.4.38-alpine\",\n \"imagePullPolicy\": \"IfNotPresent\",\n \"name\": \"e2e-test-httpd-pod\",\n \"resources\": {},\n \"terminationMessagePath\": \"/dev/termination-log\",\n \"terminationMessagePolicy\": \"File\",\n \"volumeMounts\": [\n {\n \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n \"name\": \"default-token-wv7v4\",\n \"readOnly\": true\n }\n ]\n }\n ],\n \"dnsPolicy\": \"ClusterFirst\",\n \"enableServiceLinks\": true,\n \"nodeName\": \"latest-worker2\",\n \"priority\": 0,\n \"restartPolicy\": \"Always\",\n \"schedulerName\": \"default-scheduler\",\n \"securityContext\": {},\n \"serviceAccount\": \"default\",\n \"serviceAccountName\": \"default\",\n \"terminationGracePeriodSeconds\": 30,\n \"tolerations\": [\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/not-ready\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n },\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/unreachable\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n }\n ],\n \"volumes\": [\n {\n \"name\": \"default-token-wv7v4\",\n \"secret\": {\n \"defaultMode\": 420,\n \"secretName\": \"default-token-wv7v4\"\n }\n }\n ]\n },\n \"status\": {\n \"conditions\": [\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-07-01T00:32:50Z\",\n \"status\": \"True\",\n \"type\": \"Initialized\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-07-01T00:32:50Z\",\n \"message\": \"containers with unready status: [e2e-test-httpd-pod]\",\n \"reason\": \"ContainersNotReady\",\n \"status\": \"False\",\n \"type\": \"Ready\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-07-01T00:32:50Z\",\n \"message\": \"containers with unready status: [e2e-test-httpd-pod]\",\n \"reason\": \"ContainersNotReady\",\n \"status\": \"False\",\n \"type\": \"ContainersReady\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-07-01T00:32:50Z\",\n \"status\": \"True\",\n \"type\": \"PodScheduled\"\n }\n ],\n \"containerStatuses\": [\n {\n \"image\": \"docker.io/library/httpd:2.4.38-alpine\",\n \"imageID\": \"\",\n \"lastState\": {},\n \"name\": \"e2e-test-httpd-pod\",\n \"ready\": false,\n \"restartCount\": 0,\n \"started\": false,\n \"state\": {\n \"waiting\": {\n \"reason\": \"ContainerCreating\"\n }\n }\n }\n ],\n \"hostIP\": \"172.17.0.12\",\n \"phase\": \"Pending\",\n \"qosClass\": \"BestEffort\",\n \"startTime\": \"2020-07-01T00:32:50Z\"\n }\n}\n" Jul 1 00:32:50.739: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config replace -f - --dry-run server --namespace=kubectl-3321' Jul 1 00:32:51.069: INFO: stderr: "W0701 00:32:50.809067 1529 helpers.go:553] --dry-run is deprecated and can be replaced with --dry-run=client.\n" Jul 1 00:32:51.069: INFO: stdout: "pod/e2e-test-httpd-pod replaced (dry run)\n" STEP: verifying the pod e2e-test-httpd-pod has the right image docker.io/library/httpd:2.4.38-alpine Jul 1 00:32:51.073: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 
--kubeconfig=/root/.kube/config delete pods e2e-test-httpd-pod --namespace=kubectl-3321' Jul 1 00:32:54.757: INFO: stderr: "" Jul 1 00:32:54.757: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 1 00:32:54.757: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-3321" for this suite. • [SLOW TEST:7.353 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl server-side dry-run /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:902 should check if kubectl can dry-run update Pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl server-side dry-run should check if kubectl can dry-run update Pods [Conformance]","total":294,"completed":162,"skipped":2792,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 1 00:32:54.765: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Jul 1 00:32:55.458: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Jul 1 00:32:57.467: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729160375, loc:(*time.Location)(0x80643c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729160375, loc:(*time.Location)(0x80643c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729160375, loc:(*time.Location)(0x80643c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729160375, loc:(*time.Location)(0x80643c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Jul 1 00:33:00.517: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate pod and apply defaults after mutation [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Registering the mutating pod webhook via the AdmissionRegistration API STEP: create a pod that should be updated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 1 00:33:00.633: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-3847" for this suite. STEP: Destroying namespace "webhook-3847-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:6.130 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate pod and apply defaults after mutation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","total":294,"completed":163,"skipped":2811,"failed":0} SS ------------------------------ [sig-api-machinery] Watchers should receive events on concurrent watches in same order [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 1 00:33:00.895: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should receive events on concurrent watches in same order [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: starting a background goroutine to produce watch events STEP: creating watches starting from each resource version of the events produced and verifying they all receive resource versions in the same order [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 1 00:33:05.451: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-1479" for this suite. 
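The watch-ordering test just above runs a background goroutine producing events, then opens a watch from each observed resourceVersion and asserts every watcher delivers the remaining events in the same order. A sketch of opening a single watch from an explicit resourceVersion with client-go (the namespace, resource type, and resourceVersion choice are illustrative):

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// Watches started from the same resourceVersion must see the same
	// events in the same order; the e2e test opens many of these at once.
	w, err := cs.CoreV1().ConfigMaps("default").Watch(context.Background(), metav1.ListOptions{
		ResourceVersion: "0", // "0" = start from any recent state; a real checker pins an exact version
	})
	if err != nil {
		panic(err)
	}
	defer w.Stop()

	for ev := range w.ResultChan() {
		fmt.Printf("%s %T\n", ev.Type, ev.Object)
	}
}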
•{"msg":"PASSED [sig-api-machinery] Watchers should receive events on concurrent watches in same order [Conformance]","total":294,"completed":164,"skipped":2813,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-auth] Certificates API [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 1 00:33:05.588: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename certificates STEP: Waiting for a default service account to be provisioned in namespace [It] should support CSR API operations [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: getting /apis Jul 1 00:33:06.560: FAIL: expected certificates API group/version, got []v1.APIGroup{v1.APIGroup{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, Name:"", Versions:[]v1.GroupVersionForDiscovery{v1.GroupVersionForDiscovery{GroupVersion:"v1", Version:"v1"}}, PreferredVersion:v1.GroupVersionForDiscovery{GroupVersion:"v1", Version:"v1"}, ServerAddressByClientCIDRs:[]v1.ServerAddressByClientCIDR(nil)}, v1.APIGroup{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, Name:"apiregistration.k8s.io", Versions:[]v1.GroupVersionForDiscovery{v1.GroupVersionForDiscovery{GroupVersion:"apiregistration.k8s.io/v1", Version:"v1"}, v1.GroupVersionForDiscovery{GroupVersion:"apiregistration.k8s.io/v1beta1", Version:"v1beta1"}}, PreferredVersion:v1.GroupVersionForDiscovery{GroupVersion:"apiregistration.k8s.io/v1", Version:"v1"}, ServerAddressByClientCIDRs:[]v1.ServerAddressByClientCIDR(nil)}, v1.APIGroup{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, Name:"extensions", Versions:[]v1.GroupVersionForDiscovery{v1.GroupVersionForDiscovery{GroupVersion:"extensions/v1beta1", Version:"v1beta1"}}, PreferredVersion:v1.GroupVersionForDiscovery{GroupVersion:"extensions/v1beta1", Version:"v1beta1"}, ServerAddressByClientCIDRs:[]v1.ServerAddressByClientCIDR(nil)}, v1.APIGroup{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, Name:"apps", Versions:[]v1.GroupVersionForDiscovery{v1.GroupVersionForDiscovery{GroupVersion:"apps/v1", Version:"v1"}}, PreferredVersion:v1.GroupVersionForDiscovery{GroupVersion:"apps/v1", Version:"v1"}, ServerAddressByClientCIDRs:[]v1.ServerAddressByClientCIDR(nil)}, v1.APIGroup{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, Name:"events.k8s.io", Versions:[]v1.GroupVersionForDiscovery{v1.GroupVersionForDiscovery{GroupVersion:"events.k8s.io/v1beta1", Version:"v1beta1"}}, PreferredVersion:v1.GroupVersionForDiscovery{GroupVersion:"events.k8s.io/v1beta1", Version:"v1beta1"}, ServerAddressByClientCIDRs:[]v1.ServerAddressByClientCIDR(nil)}, v1.APIGroup{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, Name:"authentication.k8s.io", Versions:[]v1.GroupVersionForDiscovery{v1.GroupVersionForDiscovery{GroupVersion:"authentication.k8s.io/v1", Version:"v1"}, v1.GroupVersionForDiscovery{GroupVersion:"authentication.k8s.io/v1beta1", Version:"v1beta1"}}, PreferredVersion:v1.GroupVersionForDiscovery{GroupVersion:"authentication.k8s.io/v1", Version:"v1"}, ServerAddressByClientCIDRs:[]v1.ServerAddressByClientCIDR(nil)}, v1.APIGroup{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, Name:"authorization.k8s.io", 
Versions:[]v1.GroupVersionForDiscovery{v1.GroupVersionForDiscovery{GroupVersion:"authorization.k8s.io/v1", Version:"v1"}, v1.GroupVersionForDiscovery{GroupVersion:"authorization.k8s.io/v1beta1", Version:"v1beta1"}}, PreferredVersion:v1.GroupVersionForDiscovery{GroupVersion:"authorization.k8s.io/v1", Version:"v1"}, ServerAddressByClientCIDRs:[]v1.ServerAddressByClientCIDR(nil)}, v1.APIGroup{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, Name:"autoscaling", Versions:[]v1.GroupVersionForDiscovery{v1.GroupVersionForDiscovery{GroupVersion:"autoscaling/v1", Version:"v1"}, v1.GroupVersionForDiscovery{GroupVersion:"autoscaling/v2beta1", Version:"v2beta1"}, v1.GroupVersionForDiscovery{GroupVersion:"autoscaling/v2beta2", Version:"v2beta2"}}, PreferredVersion:v1.GroupVersionForDiscovery{GroupVersion:"autoscaling/v1", Version:"v1"}, ServerAddressByClientCIDRs:[]v1.ServerAddressByClientCIDR(nil)}, v1.APIGroup{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, Name:"batch", Versions:[]v1.GroupVersionForDiscovery{v1.GroupVersionForDiscovery{GroupVersion:"batch/v1", Version:"v1"}, v1.GroupVersionForDiscovery{GroupVersion:"batch/v1beta1", Version:"v1beta1"}}, PreferredVersion:v1.GroupVersionForDiscovery{GroupVersion:"batch/v1", Version:"v1"}, ServerAddressByClientCIDRs:[]v1.ServerAddressByClientCIDR(nil)}, v1.APIGroup{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, Name:"certificates.k8s.io", Versions:[]v1.GroupVersionForDiscovery{v1.GroupVersionForDiscovery{GroupVersion:"certificates.k8s.io/v1beta1", Version:"v1beta1"}}, PreferredVersion:v1.GroupVersionForDiscovery{GroupVersion:"certificates.k8s.io/v1beta1", Version:"v1beta1"}, ServerAddressByClientCIDRs:[]v1.ServerAddressByClientCIDR(nil)}, v1.APIGroup{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, Name:"networking.k8s.io", Versions:[]v1.GroupVersionForDiscovery{v1.GroupVersionForDiscovery{GroupVersion:"networking.k8s.io/v1", Version:"v1"}, v1.GroupVersionForDiscovery{GroupVersion:"networking.k8s.io/v1beta1", Version:"v1beta1"}}, PreferredVersion:v1.GroupVersionForDiscovery{GroupVersion:"networking.k8s.io/v1", Version:"v1"}, ServerAddressByClientCIDRs:[]v1.ServerAddressByClientCIDR(nil)}, v1.APIGroup{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, Name:"policy", Versions:[]v1.GroupVersionForDiscovery{v1.GroupVersionForDiscovery{GroupVersion:"policy/v1beta1", Version:"v1beta1"}}, PreferredVersion:v1.GroupVersionForDiscovery{GroupVersion:"policy/v1beta1", Version:"v1beta1"}, ServerAddressByClientCIDRs:[]v1.ServerAddressByClientCIDR(nil)}, v1.APIGroup{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, Name:"rbac.authorization.k8s.io", Versions:[]v1.GroupVersionForDiscovery{v1.GroupVersionForDiscovery{GroupVersion:"rbac.authorization.k8s.io/v1", Version:"v1"}, v1.GroupVersionForDiscovery{GroupVersion:"rbac.authorization.k8s.io/v1beta1", Version:"v1beta1"}}, PreferredVersion:v1.GroupVersionForDiscovery{GroupVersion:"rbac.authorization.k8s.io/v1", Version:"v1"}, ServerAddressByClientCIDRs:[]v1.ServerAddressByClientCIDR(nil)}, v1.APIGroup{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, Name:"storage.k8s.io", Versions:[]v1.GroupVersionForDiscovery{v1.GroupVersionForDiscovery{GroupVersion:"storage.k8s.io/v1", Version:"v1"}, v1.GroupVersionForDiscovery{GroupVersion:"storage.k8s.io/v1beta1", Version:"v1beta1"}}, PreferredVersion:v1.GroupVersionForDiscovery{GroupVersion:"storage.k8s.io/v1", Version:"v1"}, ServerAddressByClientCIDRs:[]v1.ServerAddressByClientCIDR(nil)}, v1.APIGroup{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, Name:"admissionregistration.k8s.io", 
Versions:[]v1.GroupVersionForDiscovery{v1.GroupVersionForDiscovery{GroupVersion:"admissionregistration.k8s.io/v1", Version:"v1"}, v1.GroupVersionForDiscovery{GroupVersion:"admissionregistration.k8s.io/v1beta1", Version:"v1beta1"}}, PreferredVersion:v1.GroupVersionForDiscovery{GroupVersion:"admissionregistration.k8s.io/v1", Version:"v1"}, ServerAddressByClientCIDRs:[]v1.ServerAddressByClientCIDR(nil)}, v1.APIGroup{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, Name:"apiextensions.k8s.io", Versions:[]v1.GroupVersionForDiscovery{v1.GroupVersionForDiscovery{GroupVersion:"apiextensions.k8s.io/v1", Version:"v1"}, v1.GroupVersionForDiscovery{GroupVersion:"apiextensions.k8s.io/v1beta1", Version:"v1beta1"}}, PreferredVersion:v1.GroupVersionForDiscovery{GroupVersion:"apiextensions.k8s.io/v1", Version:"v1"}, ServerAddressByClientCIDRs:[]v1.ServerAddressByClientCIDR(nil)}, v1.APIGroup{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, Name:"scheduling.k8s.io", Versions:[]v1.GroupVersionForDiscovery{v1.GroupVersionForDiscovery{GroupVersion:"scheduling.k8s.io/v1", Version:"v1"}, v1.GroupVersionForDiscovery{GroupVersion:"scheduling.k8s.io/v1beta1", Version:"v1beta1"}}, PreferredVersion:v1.GroupVersionForDiscovery{GroupVersion:"scheduling.k8s.io/v1", Version:"v1"}, ServerAddressByClientCIDRs:[]v1.ServerAddressByClientCIDR(nil)}, v1.APIGroup{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, Name:"coordination.k8s.io", Versions:[]v1.GroupVersionForDiscovery{v1.GroupVersionForDiscovery{GroupVersion:"coordination.k8s.io/v1", Version:"v1"}, v1.GroupVersionForDiscovery{GroupVersion:"coordination.k8s.io/v1beta1", Version:"v1beta1"}}, PreferredVersion:v1.GroupVersionForDiscovery{GroupVersion:"coordination.k8s.io/v1", Version:"v1"}, ServerAddressByClientCIDRs:[]v1.ServerAddressByClientCIDR(nil)}, v1.APIGroup{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, Name:"node.k8s.io", Versions:[]v1.GroupVersionForDiscovery{v1.GroupVersionForDiscovery{GroupVersion:"node.k8s.io/v1beta1", Version:"v1beta1"}}, PreferredVersion:v1.GroupVersionForDiscovery{GroupVersion:"node.k8s.io/v1beta1", Version:"v1beta1"}, ServerAddressByClientCIDRs:[]v1.ServerAddressByClientCIDR(nil)}, v1.APIGroup{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, Name:"discovery.k8s.io", Versions:[]v1.GroupVersionForDiscovery{v1.GroupVersionForDiscovery{GroupVersion:"discovery.k8s.io/v1beta1", Version:"v1beta1"}}, PreferredVersion:v1.GroupVersionForDiscovery{GroupVersion:"discovery.k8s.io/v1beta1", Version:"v1beta1"}, ServerAddressByClientCIDRs:[]v1.ServerAddressByClientCIDR(nil)}} Expected : false to equal : true Full Stack Trace k8s.io/kubernetes/test/e2e/auth.glob..func2.2() /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/certificates.go:231 +0x7ce k8s.io/kubernetes/test/e2e.RunE2ETests(0xc002a0af00) _output/local/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x360 k8s.io/kubernetes/test/e2e.TestE2E(0xc002a0af00) _output/local/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:141 +0x2b testing.tRunner(0xc002a0af00, 0x4e37068) /usr/local/go/src/testing/testing.go:909 +0xc9 created by testing.(*T).Run /usr/local/go/src/testing/testing.go:960 +0x350 [AfterEach] [sig-auth] Certificates API [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 STEP: Collecting events from namespace "certificates-3133". STEP: Found 0 events. 
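This is the section's one failure, and the dump largely explains it: the server's discovery list advertises certificates.k8s.io only at v1beta1, while the assertion at certificates.go:231 apparently requires a newer group/version (the "Expected false to equal true" is the presence check failing). That reads as version skew between the test suite and the apiserver rather than a broken cluster. A sketch of the same discovery check with client-go:

package main

import (
	"fmt"

	"k8s.io/client-go/discovery"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	dc, err := discovery.NewDiscoveryClientForConfig(cfg)
	if err != nil {
		panic(err)
	}
	groups, err := dc.ServerGroups()
	if err != nil {
		panic(err)
	}
	// Print which certificates.k8s.io versions this apiserver actually serves.
	for _, g := range groups.Groups {
		if g.Name == "certificates.k8s.io" {
			for _, v := range g.Versions {
				fmt.Println("served:", v.GroupVersion)
			}
		}
	}
}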
Jul 1 00:33:06.565: INFO: POD NODE PHASE GRACE CONDITIONS Jul 1 00:33:06.565: INFO: Jul 1 00:33:06.569: INFO: Logging node info for node latest-control-plane Jul 1 00:33:06.571: INFO: Node Info: &Node{ObjectMeta:{latest-control-plane /api/v1/nodes/latest-control-plane b7c23ecc-1548-479e-83f7-eb5444fbe13d 17248215 0 2020-04-29 09:53:29 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:latest-control-plane kubernetes.io/os:linux node-role.kubernetes.io/master:] map[kubeadm.alpha.kubernetes.io/cri-socket:/run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2020-04-29 09:53:35 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/master":{}}}}} {kube-controller-manager Update v1 2020-04-29 09:54:06 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.0.0/24\"":{}},"f:taints":{}}}} {kubelet Update v1 2020-07-01 00:31:23 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:capacity":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}}]},Spec:NodeSpec{PodCIDR:10.244.0.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.0.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922129408 0} {} 131759892Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922129408 0} {} 131759892Ki BinarySI},pods: 
{{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2020-07-01 00:31:23 +0000 UTC,LastTransitionTime:2020-04-29 09:53:26 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2020-07-01 00:31:23 +0000 UTC,LastTransitionTime:2020-04-29 09:53:26 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2020-07-01 00:31:23 +0000 UTC,LastTransitionTime:2020-04-29 09:53:26 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2020-07-01 00:31:23 +0000 UTC,LastTransitionTime:2020-04-29 09:54:06 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.17.0.11,},NodeAddress{Type:Hostname,Address:latest-control-plane,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:3939cf129c9d4d6e85e611ab996d9137,SystemUUID:2573ae1d-4849-412e-9a34-432f95556990,BootID:ca2aa731-f890-4956-92a1-ff8c7560d571,KernelVersion:4.15.0-88-generic,OSImage:Ubuntu 19.10,ContainerRuntimeVersion:containerd://1.3.3-14-g449e9269,KubeletVersion:v1.18.2,KubeProxyVersion:v1.18.2,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[k8s.gcr.io/etcd:3.4.3-0],SizeBytes:289997247,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver:v1.18.2],SizeBytes:146648881,},ContainerImage{Names:[k8s.gcr.io/kube-proxy:v1.18.2],SizeBytes:132860030,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager:v1.18.2],SizeBytes:132826433,},ContainerImage{Names:[docker.io/kindest/kindnetd:0.5.4],SizeBytes:113207016,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler:v1.18.2],SizeBytes:113095985,},ContainerImage{Names:[k8s.gcr.io/debian-base:v2.0.0],SizeBytes:53884301,},ContainerImage{Names:[k8s.gcr.io/coredns:1.6.7],SizeBytes:43921887,},ContainerImage{Names:[docker.io/rancher/local-path-provisioner:v0.0.12],SizeBytes:41994847,},ContainerImage{Names:[k8s.gcr.io/pause:3.2],SizeBytes:685724,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Jul 1 00:33:06.571: INFO: Logging kubelet events for node latest-control-plane Jul 1 00:33:06.573: INFO: Logging pods the kubelet thinks is on node latest-control-plane Jul 1 00:33:06.595: INFO: etcd-latest-control-plane started at 2020-04-29 09:53:36 +0000 UTC (0+1 container statuses recorded) Jul 1 00:33:06.595: INFO: Container etcd ready: true, restart count 4 Jul 1 00:33:06.595: INFO: kube-apiserver-latest-control-plane started at 2020-04-29 09:53:36 +0000 UTC (0+1 container statuses recorded) Jul 1 00:33:06.595: INFO: Container kube-apiserver ready: true, restart count 3 Jul 1 00:33:06.595: INFO: kindnet-8x7pf started at 2020-04-29 09:53:53 +0000 UTC (0+1 container statuses recorded) Jul 1 00:33:06.595: INFO: Container kindnet-cni ready: true, restart count 5 Jul 1 00:33:06.595: INFO: coredns-66bff467f8-8n5vh started at 2020-04-29 09:54:06 +0000 UTC (0+1 container statuses recorded) Jul 1 00:33:06.595: INFO: Container coredns ready: true, restart count 0 Jul 1 00:33:06.595: INFO: local-path-provisioner-bd4bb6b75-bmf2h started at 2020-04-29 09:54:06 +0000 UTC (0+1 container statuses recorded) Jul 1 00:33:06.595: INFO: Container local-path-provisioner ready: true, restart count 
94 Jul 1 00:33:06.595: INFO: kube-scheduler-latest-control-plane started at 2020-04-29 09:53:36 +0000 UTC (0+1 container statuses recorded) Jul 1 00:33:06.595: INFO: Container kube-scheduler ready: true, restart count 124 Jul 1 00:33:06.595: INFO: kube-controller-manager-latest-control-plane started at 2020-04-29 09:53:36 +0000 UTC (0+1 container statuses recorded) Jul 1 00:33:06.595: INFO: Container kube-controller-manager ready: true, restart count 128 Jul 1 00:33:06.595: INFO: kube-proxy-h8mhz started at 2020-04-29 09:53:54 +0000 UTC (0+1 container statuses recorded) Jul 1 00:33:06.595: INFO: Container kube-proxy ready: true, restart count 0 Jul 1 00:33:06.595: INFO: coredns-66bff467f8-qr7l5 started at 2020-04-29 09:54:10 +0000 UTC (0+1 container statuses recorded) Jul 1 00:33:06.595: INFO: Container coredns ready: true, restart count 0 W0701 00:33:06.598440 8 metrics_grabber.go:94] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Jul 1 00:33:06.680: INFO: Latency metrics for node latest-control-plane Jul 1 00:33:06.680: INFO: Logging node info for node latest-worker Jul 1 00:33:06.684: INFO: Node Info: &Node{ObjectMeta:{latest-worker /api/v1/nodes/latest-worker 2f09bb79-b24c-46f4-8a0d-ace124db698c 17248213 0 2020-04-29 09:54:07 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:latest-worker kubernetes.io/os:linux] map[kubeadm.alpha.kubernetes.io/cri-socket:/run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2020-04-29 09:54:07 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {kube-controller-manager Update v1 2020-04-29 09:54:37 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}} {kubelet Update v1 2020-07-01 00:31:23 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:capacity":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}}]},Spec:NodeSpec{PodCIDR:10.244.1.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.1.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922129408 0} {} 131759892Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922129408 0} {} 131759892Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2020-07-01 00:31:23 +0000 UTC,LastTransitionTime:2020-04-29 09:54:07 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2020-07-01 00:31:23 +0000 UTC,LastTransitionTime:2020-04-29 09:54:07 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2020-07-01 00:31:23 +0000 UTC,LastTransitionTime:2020-04-29 09:54:07 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2020-07-01 00:31:23 +0000 UTC,LastTransitionTime:2020-04-29 09:54:37 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready 
status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.17.0.13,},NodeAddress{Type:Hostname,Address:latest-worker,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:83dc4a3bd84a4693999c93a6c8c5f678,SystemUUID:66e94596-e77d-487e-8e4a-bc652b040cea,BootID:ca2aa731-f890-4956-92a1-ff8c7560d571,KernelVersion:4.15.0-88-generic,OSImage:Ubuntu 19.10,ContainerRuntimeVersion:containerd://1.3.3-14-g449e9269,KubeletVersion:v1.18.2,KubeProxyVersion:v1.18.2,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[docker.io/ollivier/clearwater-cassandra@sha256:07e93f55decdc1224fb8d161edb5617d58e3488c1250168337548ccc3e82f6b7 docker.io/ollivier/clearwater-cassandra:latest],SizeBytes:386164043,},ContainerImage{Names:[docker.io/ollivier/clearwater-homestead-prov@sha256:141a336f17eaf068dbe8da4b01a832033aed5c09e7fa6349ec091ee30b76c9b1 docker.io/ollivier/clearwater-homestead-prov:latest],SizeBytes:360403156,},ContainerImage{Names:[docker.io/ollivier/clearwater-ellis@sha256:8c84761d2d906e344bc6a85a11451d35696cf684305555611df16ce2615ac816 docker.io/ollivier/clearwater-ellis:latest],SizeBytes:351094667,},ContainerImage{Names:[docker.io/ollivier/clearwater-homer@sha256:19c6d11d2678c44822f07c01c574fed426e3c99003b6af0410f0911d57939d5a docker.io/ollivier/clearwater-homer:latest],SizeBytes:343984685,},ContainerImage{Names:[docker.io/ollivier/clearwater-astaire@sha256:f365f3b72267bef0fd696e4a93c0f3c19fb65ad42a8850fe22873dbadd03fdba docker.io/ollivier/clearwater-astaire:latest],SizeBytes:326777758,},ContainerImage{Names:[docker.io/ollivier/clearwater-bono@sha256:eb98596100b1553c9814b6185863ec53e743eb0370faeeafe16fc1dfe8d02ec3 docker.io/ollivier/clearwater-bono:latest],SizeBytes:303283801,},ContainerImage{Names:[docker.io/ollivier/clearwater-sprout@sha256:44590682de48854faeccc1f4c7de39cb666014a0c4e3abd93adcccad3208a6e2 docker.io/ollivier/clearwater-sprout:latest],SizeBytes:298307172,},ContainerImage{Names:[docker.io/ollivier/clearwater-homestead@sha256:0b3c89ab451b09e347657d5f85ed99d47ec3e8689b98916af72b23576926b08d docker.io/ollivier/clearwater-homestead:latest],SizeBytes:294847386,},ContainerImage{Names:[k8s.gcr.io/etcd:3.4.3-0],SizeBytes:289997247,},ContainerImage{Names:[docker.io/ollivier/clearwater-ralf@sha256:20069a8d9f366dd0f003afa7c4fbcbcd5e9d2b99abae83540c6538fc7cff6b97 docker.io/ollivier/clearwater-ralf:latest],SizeBytes:287124270,},ContainerImage{Names:[docker.io/ollivier/clearwater-chronos@sha256:8ddcfa68c82ebf0b4ce6add019a8f57c024aec453f47a37017cf7dff8680268a docker.io/ollivier/clearwater-chronos:latest],SizeBytes:285184449,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver:v1.18.2],SizeBytes:146648881,},ContainerImage{Names:[k8s.gcr.io/kube-proxy:v1.18.2],SizeBytes:132860030,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager:v1.18.2],SizeBytes:132826433,},ContainerImage{Names:[docker.io/aquasec/kube-hunter@sha256:9e6d47f5fb42621781fac92b9f8f86a7e00596fd5c022472a51d33b8c6638b85 
docker.io/aquasec/kube-hunter:latest],SizeBytes:126124611,},ContainerImage{Names:[docker.io/aquasec/kube-hunter@sha256:5a7b70d343cfaeff79f6e6a8f473983a5eb7ca52f723aa8aa226aad4ee5b96e3],SizeBytes:125323634,},ContainerImage{Names:[docker.io/aquasec/kube-hunter@sha256:795d89480038d62363491066edd962a3f0042c338d4d9feb3f4db23ac659fb40],SizeBytes:124499152,},ContainerImage{Names:[docker.io/kindest/kindnetd:0.5.4],SizeBytes:113207016,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler:v1.18.2],SizeBytes:113095985,},ContainerImage{Names:[k8s.gcr.io/etcd@sha256:12f377200949c25fde1e54bba639d34d119edd7cfcfb1d117526dba677c03c85 k8s.gcr.io/etcd:3.4.7],SizeBytes:104221097,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/jessie-dnsutils@sha256:ad583e33cb284f7ef046673809b146ec4053cda19b54a85d2b180a86169715eb gcr.io/kubernetes-e2e-test-images/jessie-dnsutils:1.0],SizeBytes:85425365,},ContainerImage{Names:[k8s.gcr.io/debian-base:v2.0.0],SizeBytes:53884301,},ContainerImage{Names:[docker.io/ollivier/clearwater-live-test@sha256:c2efaddff058c146b93517d06a3a8066b6e88fecdd98fa6847cb69db22555f04 docker.io/ollivier/clearwater-live-test:latest],SizeBytes:46948523,},ContainerImage{Names:[us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost@sha256:6d5c9e684dd8f91cc36601933d51b91768d0606593de6820e19e5f194b0df1b9 us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13],SizeBytes:45704260,},ContainerImage{Names:[k8s.gcr.io/coredns:1.6.7],SizeBytes:43921887,},ContainerImage{Names:[docker.io/rancher/local-path-provisioner:v0.0.12],SizeBytes:41994847,},ContainerImage{Names:[docker.io/library/httpd@sha256:addd70e4ee83f3bc9a4c1c7c41e37927ba47faf639312fc936df3afad7926f5a docker.io/library/httpd:2.4.39-alpine],SizeBytes:41901429,},ContainerImage{Names:[docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060 docker.io/library/httpd:2.4.38-alpine],SizeBytes:40765017,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/sample-apiserver@sha256:ff02aacd9766d597883fabafc7ad604c719a57611db1bcc1564c69a45b000a55 gcr.io/kubernetes-e2e-test-images/sample-apiserver:1.17],SizeBytes:25311280,},ContainerImage{Names:[docker.io/aquasec/kube-bench@sha256:1e2b01ec091289327cd7e1b527c11b95db710ace489c9bd665c0d771c0225729],SizeBytes:8039938,},ContainerImage{Names:[docker.io/aquasec/kube-bench@sha256:9d86125c0409a16346857dbda530cf29583c87f186281745f539c12e3dcd38a7],SizeBytes:8039918,},ContainerImage{Names:[docker.io/aquasec/kube-bench@sha256:c42be6eafdbe71363ad6a2035fe53f12dbe36aab19a1a3c015231e97cd11d986],SizeBytes:8039911,},ContainerImage{Names:[docker.io/aquasec/kube-bench@sha256:bdfc3a8aeed63e545ab0df01806707219ffb785bca75e08cbee043075dedfb3c],SizeBytes:8039898,},ContainerImage{Names:[docker.io/aquasec/kube-bench@sha256:a3fe72ad3946d830134b92e5c922a92d4aeb594f0445d178f9e2d610b1be04b5],SizeBytes:8039861,},ContainerImage{Names:[docker.io/aquasec/kube-bench@sha256:6da1996cf654bbc10175028832d6ffb92720272d0deca971bb296ea9092f4273],SizeBytes:8039845,},ContainerImage{Names:[docker.io/aquasec/kube-bench@sha256:5979eaa13cb8b9b2027f4e75bb350a5af70d73719f2a260fa50f593ef63e857b 
docker.io/aquasec/kube-bench:latest],SizeBytes:8038593,},ContainerImage{Names:[docker.io/aquasec/kube-bench@sha256:bab47f459428d6cc682ec6b7cffd4203ce53c413748fe366f2533d0cda2808ce],SizeBytes:8037981,},ContainerImage{Names:[docker.io/aquasec/kube-bench@sha256:cab37ac2de78ddbc6655eddae38239ebafdf79a7934bc53361e1524a2ed5ab56],SizeBytes:8035885,},ContainerImage{Names:[docker.io/aquasec/kube-bench@sha256:ee55386ef35bea93a3a0900fd714038bebd156e0448addf839f38093dbbaace9],SizeBytes:8029111,},ContainerImage{Names:[quay.io/coreos/etcd:v2.2.5],SizeBytes:7670543,},ContainerImage{Names:[docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker.io/library/nginx:1.14-alpine],SizeBytes:6978806,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:4381769,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nonewprivs@sha256:10066e9039219449fe3c81f38fe01928f87914150768ab81b62a468e51fa7411 gcr.io/kubernetes-e2e-test-images/nonewprivs:1.0],SizeBytes:3054649,},ContainerImage{Names:[docker.io/appropriate/curl@sha256:c8bf5bbec6397465a247c2bb3e589bb77e4f62ff88a027175ecb2d9e4f12c9d7 docker.io/appropriate/curl:latest],SizeBytes:2779755,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nautilus@sha256:33a732d4c42a266912a5091598a0f07653c9134db4b8d571690d8afd509e0bfc gcr.io/kubernetes-e2e-test-images/nautilus:1.0],SizeBytes:1804628,},ContainerImage{Names:[docker.io/library/busybox@sha256:3a320776f9146d4efff6162d38f4d355e24cd852adb1ff5f8e32f1b23e4e33fa docker.io/library/busybox@sha256:9ddee63a712cea977267342e8750ecbc60d3aab25f04ceacfa795e6fce341793 docker.io/library/busybox:latest],SizeBytes:767885,},ContainerImage{Names:[docker.io/library/busybox@sha256:a8cf7ff6367c2afa2a90acd081b484cbded349a7076e7bdf37a05279f276bc12],SizeBytes:764955,},ContainerImage{Names:[docker.io/library/busybox@sha256:836945da1f3afe2cfff376d379852bbb82e0237cb2925d53a13f53d6e8a8c48c docker.io/library/busybox@sha256:52cfc475afdd697afd2dbe1a3761c8001bf3ba39f76819c922128c088869d339],SizeBytes:764948,},ContainerImage{Names:[docker.io/library/busybox@sha256:95cf004f559831017cdf4628aaf1bb30133677be8702a8c5f2994629f637a209],SizeBytes:764556,},ContainerImage{Names:[docker.io/library/busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796 docker.io/library/busybox:1.29],SizeBytes:732685,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Jul 1 00:33:06.686: INFO: Logging kubelet events for node latest-worker Jul 1 00:33:06.689: INFO: Logging pods the kubelet thinks is on node latest-worker Jul 1 00:33:06.697: INFO: kube-proxy-c8n27 started at 2020-04-29 09:54:13 +0000 UTC (0+1 container statuses recorded) Jul 1 00:33:06.697: INFO: Container kube-proxy ready: true, restart count 0 Jul 1 00:33:06.697: INFO: rally-c184502e-30nwopzm started at 2020-05-11 08:48:25 +0000 UTC (0+1 container statuses recorded) Jul 1 00:33:06.697: INFO: Container rally-c184502e-30nwopzm ready: true, restart count 0 Jul 1 00:33:06.697: INFO: kindnet-hg2tf started at 2020-04-29 09:54:13 +0000 UTC (0+1 container statuses recorded) Jul 1 00:33:06.697: INFO: Container kindnet-cni ready: true, restart count 7 Jul 1 00:33:06.697: INFO: rally-c184502e-30nwopzm-7fmqm started at 2020-05-11 08:48:29 +0000 UTC (0+1 container statuses recorded) Jul 1 00:33:06.697: INFO: Container rally-c184502e-30nwopzm ready: false, restart count 0 W0701 00:33:06.701105 8 
metrics_grabber.go:94] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Jul 1 00:33:06.750: INFO: Latency metrics for node latest-worker Jul 1 00:33:06.750: INFO: Logging node info for node latest-worker2 Jul 1 00:33:06.754: INFO: Node Info: &Node{ObjectMeta:{latest-worker2 /api/v1/nodes/latest-worker2 edb8c16e-16f9-40fa-97b0-84ba80a01b1f 17248699 0 2020-04-29 09:54:08 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:latest-worker2 kubernetes.io/os:linux] map[kubeadm.alpha.kubernetes.io/cri-socket:/run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2020-04-29 09:54:08 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {kube-controller-manager Update v1 2020-04-29 09:54:38 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.2.0/24\"":{}}}}} {kubelet Update v1 2020-07-01 00:33:02 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:capacity":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}}]},Spec:NodeSpec{PodCIDR:10.244.2.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.2.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922129408 0} {} 131759892Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922129408 0} {} 131759892Ki BinarySI},pods: {{110 0} {} 110 
DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2020-07-01 00:33:02 +0000 UTC,LastTransitionTime:2020-04-29 09:54:08 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2020-07-01 00:33:02 +0000 UTC,LastTransitionTime:2020-04-29 09:54:08 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2020-07-01 00:33:02 +0000 UTC,LastTransitionTime:2020-04-29 09:54:08 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2020-07-01 00:33:02 +0000 UTC,LastTransitionTime:2020-04-29 09:54:38 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.17.0.12,},NodeAddress{Type:Hostname,Address:latest-worker2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:a92a0b35db3a4f1fb7e74bf96e498c99,SystemUUID:8fa82d10-b80f-4f70-a9ff-665f94ff4ecc,BootID:ca2aa731-f890-4956-92a1-ff8c7560d571,KernelVersion:4.15.0-88-generic,OSImage:Ubuntu 19.10,ContainerRuntimeVersion:containerd://1.3.3-14-g449e9269,KubeletVersion:v1.18.2,KubeProxyVersion:v1.18.2,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[docker.io/ollivier/clearwater-cassandra@sha256:07e93f55decdc1224fb8d161edb5617d58e3488c1250168337548ccc3e82f6b7 docker.io/ollivier/clearwater-cassandra:latest],SizeBytes:386164043,},ContainerImage{Names:[docker.io/ollivier/clearwater-homestead-prov@sha256:141a336f17eaf068dbe8da4b01a832033aed5c09e7fa6349ec091ee30b76c9b1 docker.io/ollivier/clearwater-homestead-prov:latest],SizeBytes:360403156,},ContainerImage{Names:[docker.io/ollivier/clearwater-ellis@sha256:8c84761d2d906e344bc6a85a11451d35696cf684305555611df16ce2615ac816 docker.io/ollivier/clearwater-ellis:latest],SizeBytes:351094667,},ContainerImage{Names:[docker.io/ollivier/clearwater-homer@sha256:19c6d11d2678c44822f07c01c574fed426e3c99003b6af0410f0911d57939d5a docker.io/ollivier/clearwater-homer:latest],SizeBytes:343984685,},ContainerImage{Names:[docker.io/ollivier/clearwater-astaire@sha256:f365f3b72267bef0fd696e4a93c0f3c19fb65ad42a8850fe22873dbadd03fdba docker.io/ollivier/clearwater-astaire:latest],SizeBytes:326777758,},ContainerImage{Names:[docker.io/ollivier/clearwater-bono@sha256:eb98596100b1553c9814b6185863ec53e743eb0370faeeafe16fc1dfe8d02ec3 docker.io/ollivier/clearwater-bono:latest],SizeBytes:303283801,},ContainerImage{Names:[docker.io/ollivier/clearwater-sprout@sha256:44590682de48854faeccc1f4c7de39cb666014a0c4e3abd93adcccad3208a6e2 docker.io/ollivier/clearwater-sprout:latest],SizeBytes:298307172,},ContainerImage{Names:[docker.io/ollivier/clearwater-homestead@sha256:0b3c89ab451b09e347657d5f85ed99d47ec3e8689b98916af72b23576926b08d docker.io/ollivier/clearwater-homestead:latest],SizeBytes:294847386,},ContainerImage{Names:[k8s.gcr.io/etcd:3.4.3-0],SizeBytes:289997247,},ContainerImage{Names:[docker.io/ollivier/clearwater-ralf@sha256:20069a8d9f366dd0f003afa7c4fbcbcd5e9d2b99abae83540c6538fc7cff6b97 docker.io/ollivier/clearwater-ralf:latest],SizeBytes:287124270,},ContainerImage{Names:[docker.io/ollivier/clearwater-chronos@sha256:8ddcfa68c82ebf0b4ce6add019a8f57c024aec453f47a37017cf7dff8680268a 
docker.io/ollivier/clearwater-chronos:latest],SizeBytes:285184449,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver:v1.18.2],SizeBytes:146648881,},ContainerImage{Names:[k8s.gcr.io/kube-proxy:v1.18.2],SizeBytes:132860030,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager:v1.18.2],SizeBytes:132826433,},ContainerImage{Names:[docker.io/aquasec/kube-hunter@sha256:31a93c2501d1648258f610a15bbf40a41d4f10c319a621d5f8ab077d87fcf4b7 docker.io/aquasec/kube-hunter:latest],SizeBytes:127839307,},ContainerImage{Names:[docker.io/aquasec/kube-hunter@sha256:d0af3efaa83cf2106879b7fd3972faaee44a0d4a82db97b27f33f8c71aa450b3],SizeBytes:127384616,},ContainerImage{Names:[docker.io/aquasec/kube-hunter@sha256:9e6d47f5fb42621781fac92b9f8f86a7e00596fd5c022472a51d33b8c6638b85],SizeBytes:126124611,},ContainerImage{Names:[docker.io/aquasec/kube-hunter@sha256:5a7b70d343cfaeff79f6e6a8f473983a5eb7ca52f723aa8aa226aad4ee5b96e3],SizeBytes:125323634,},ContainerImage{Names:[docker.io/aquasec/kube-hunter@sha256:795d89480038d62363491066edd962a3f0042c338d4d9feb3f4db23ac659fb40],SizeBytes:124499152,},ContainerImage{Names:[docker.io/kindest/kindnetd:0.5.4],SizeBytes:113207016,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler:v1.18.2],SizeBytes:113095985,},ContainerImage{Names:[k8s.gcr.io/etcd@sha256:12f377200949c25fde1e54bba639d34d119edd7cfcfb1d117526dba677c03c85 k8s.gcr.io/etcd:3.4.7],SizeBytes:104221097,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/jessie-dnsutils@sha256:ad583e33cb284f7ef046673809b146ec4053cda19b54a85d2b180a86169715eb gcr.io/kubernetes-e2e-test-images/jessie-dnsutils:1.0],SizeBytes:85425365,},ContainerImage{Names:[k8s.gcr.io/debian-base:v2.0.0],SizeBytes:53884301,},ContainerImage{Names:[docker.io/ollivier/clearwater-live-test@sha256:c2efaddff058c146b93517d06a3a8066b6e88fecdd98fa6847cb69db22555f04 docker.io/ollivier/clearwater-live-test:latest],SizeBytes:46948523,},ContainerImage{Names:[us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost@sha256:6d5c9e684dd8f91cc36601933d51b91768d0606593de6820e19e5f194b0df1b9 us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13],SizeBytes:45704260,},ContainerImage{Names:[k8s.gcr.io/coredns:1.6.7],SizeBytes:43921887,},ContainerImage{Names:[docker.io/rancher/local-path-provisioner:v0.0.12],SizeBytes:41994847,},ContainerImage{Names:[docker.io/library/httpd@sha256:addd70e4ee83f3bc9a4c1c7c41e37927ba47faf639312fc936df3afad7926f5a docker.io/library/httpd:2.4.39-alpine],SizeBytes:41901429,},ContainerImage{Names:[docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060 docker.io/library/httpd:2.4.38-alpine],SizeBytes:40765017,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/sample-apiserver@sha256:ff02aacd9766d597883fabafc7ad604c719a57611db1bcc1564c69a45b000a55 gcr.io/kubernetes-e2e-test-images/sample-apiserver:1.17],SizeBytes:25311280,},ContainerImage{Names:[docker.io/aquasec/kube-bench@sha256:1e2b01ec091289327cd7e1b527c11b95db710ace489c9bd665c0d771c0225729],SizeBytes:8039938,},ContainerImage{Names:[docker.io/aquasec/kube-bench@sha256:9d86125c0409a16346857dbda530cf29583c87f186281745f539c12e3dcd38a7],SizeBytes:8039918,},ContainerImage{Names:[docker.io/aquasec/kube-bench@sha256:bdfc3a8aeed63e545ab0df01806707219ffb785bca75e08cbee043075dedfb3c],SizeBytes:8039898,},ContainerImage{Names:[docker.io/aquasec/kube-bench@sha256:a3fe72ad3946d830134b92e5c922a92d4aeb594f0445d178f9e2d610b1be04b5 
docker.io/aquasec/kube-bench:latest],SizeBytes:8039861,},ContainerImage{Names:[docker.io/aquasec/kube-bench@sha256:ee55386ef35bea93a3a0900fd714038bebd156e0448addf839f38093dbbaace9],SizeBytes:8029111,},ContainerImage{Names:[quay.io/coreos/etcd:v2.2.5],SizeBytes:7670543,},ContainerImage{Names:[docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker.io/library/nginx:1.14-alpine],SizeBytes:6978806,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:4381769,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nonewprivs@sha256:10066e9039219449fe3c81f38fe01928f87914150768ab81b62a468e51fa7411 gcr.io/kubernetes-e2e-test-images/nonewprivs:1.0],SizeBytes:3054649,},ContainerImage{Names:[docker.io/appropriate/curl@sha256:c8bf5bbec6397465a247c2bb3e589bb77e4f62ff88a027175ecb2d9e4f12c9d7 docker.io/appropriate/curl:latest],SizeBytes:2779755,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nautilus@sha256:33a732d4c42a266912a5091598a0f07653c9134db4b8d571690d8afd509e0bfc gcr.io/kubernetes-e2e-test-images/nautilus:1.0],SizeBytes:1804628,},ContainerImage{Names:[docker.io/library/busybox@sha256:3a320776f9146d4efff6162d38f4d355e24cd852adb1ff5f8e32f1b23e4e33fa docker.io/library/busybox@sha256:9ddee63a712cea977267342e8750ecbc60d3aab25f04ceacfa795e6fce341793 docker.io/library/busybox:latest],SizeBytes:767885,},ContainerImage{Names:[docker.io/library/busybox@sha256:a8cf7ff6367c2afa2a90acd081b484cbded349a7076e7bdf37a05279f276bc12],SizeBytes:764955,},ContainerImage{Names:[docker.io/library/busybox@sha256:52cfc475afdd697afd2dbe1a3761c8001bf3ba39f76819c922128c088869d339 docker.io/library/busybox@sha256:836945da1f3afe2cfff376d379852bbb82e0237cb2925d53a13f53d6e8a8c48c],SizeBytes:764948,},ContainerImage{Names:[docker.io/library/busybox@sha256:95cf004f559831017cdf4628aaf1bb30133677be8702a8c5f2994629f637a209],SizeBytes:764556,},ContainerImage{Names:[docker.io/library/busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796 docker.io/library/busybox:1.29],SizeBytes:732685,},ContainerImage{Names:[k8s.gcr.io/pause:3.2],SizeBytes:685724,},ContainerImage{Names:[docker.io/kubernetes/pause@sha256:b31bfb4d0213f254d361e0079deaaebefa4f82ba7aa76ef82e90b4935ad5b105 docker.io/kubernetes/pause:latest],SizeBytes:74015,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Jul 1 00:33:06.756: INFO: Logging kubelet events for node latest-worker2 Jul 1 00:33:06.759: INFO: Logging pods the kubelet thinks are on node latest-worker2 Jul 1 00:33:06.783: INFO: kindnet-jl4dn started at 2020-04-29 09:54:11 +0000 UTC (0+1 container statuses recorded) Jul 1 00:33:06.783: INFO: Container kindnet-cni ready: true, restart count 7 Jul 1 00:33:06.783: INFO: kube-proxy-pcmmp started at 2020-04-29 09:54:11 +0000 UTC (0+1 container statuses recorded) Jul 1 00:33:06.783: INFO: Container kube-proxy ready: true, restart count 0 Jul 1 00:33:06.783: INFO: rally-c184502e-ept97j69-6xvbj started at 2020-05-11 08:48:03 +0000 UTC (0+1 container statuses recorded) Jul 1 00:33:06.783: INFO: Container rally-c184502e-ept97j69 ready: false, restart count 0 Jul 1 00:33:06.783: INFO: webhook-to-be-mutated started at 2020-07-01 00:33:00 +0000 UTC (1+1 container statuses recorded) Jul 1 00:33:06.783: INFO: Init container webhook-added-init-container ready: false, restart count 0 Jul 1 00:33:06.783: INFO: Container example ready: false, restart count 0 Jul
1 00:33:06.783: INFO: terminate-cmd-rpa297bb112-e54d-4fcd-9997-b59cbf421a58 started at 2020-05-12 09:11:35 +0000 UTC (0+1 container statuses recorded) Jul 1 00:33:06.783: INFO: Container terminate-cmd-rpa ready: true, restart count 2 W0701 00:33:06.788504 8 metrics_grabber.go:94] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Jul 1 00:33:06.835: INFO: Latency metrics for node latest-worker2 Jul 1 00:33:06.835: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "certificates-3133" for this suite. • Failure [1.257 seconds] [sig-auth] Certificates API [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23 should support CSR API operations [Conformance] [It] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Jul 1 00:33:06.560: expected certificates API group/version, got []v1.APIGroup{v1.APIGroup{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, Name:"", Versions:[]v1.GroupVersionForDiscovery{v1.GroupVersionForDiscovery{GroupVersion:"v1", Version:"v1"}}, PreferredVersion:v1.GroupVersionForDiscovery{GroupVersion:"v1", Version:"v1"}, ServerAddressByClientCIDRs:[]v1.ServerAddressByClientCIDR(nil)}, v1.APIGroup{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, Name:"apiregistration.k8s.io", Versions:[]v1.GroupVersionForDiscovery{v1.GroupVersionForDiscovery{GroupVersion:"apiregistration.k8s.io/v1", Version:"v1"}, v1.GroupVersionForDiscovery{GroupVersion:"apiregistration.k8s.io/v1beta1", Version:"v1beta1"}}, PreferredVersion:v1.GroupVersionForDiscovery{GroupVersion:"apiregistration.k8s.io/v1", Version:"v1"}, ServerAddressByClientCIDRs:[]v1.ServerAddressByClientCIDR(nil)}, v1.APIGroup{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, Name:"extensions", Versions:[]v1.GroupVersionForDiscovery{v1.GroupVersionForDiscovery{GroupVersion:"extensions/v1beta1", Version:"v1beta1"}}, PreferredVersion:v1.GroupVersionForDiscovery{GroupVersion:"extensions/v1beta1", Version:"v1beta1"}, ServerAddressByClientCIDRs:[]v1.ServerAddressByClientCIDR(nil)}, v1.APIGroup{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, Name:"apps", Versions:[]v1.GroupVersionForDiscovery{v1.GroupVersionForDiscovery{GroupVersion:"apps/v1", Version:"v1"}}, PreferredVersion:v1.GroupVersionForDiscovery{GroupVersion:"apps/v1", Version:"v1"}, ServerAddressByClientCIDRs:[]v1.ServerAddressByClientCIDR(nil)}, v1.APIGroup{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, Name:"events.k8s.io", Versions:[]v1.GroupVersionForDiscovery{v1.GroupVersionForDiscovery{GroupVersion:"events.k8s.io/v1beta1", Version:"v1beta1"}}, PreferredVersion:v1.GroupVersionForDiscovery{GroupVersion:"events.k8s.io/v1beta1", Version:"v1beta1"}, ServerAddressByClientCIDRs:[]v1.ServerAddressByClientCIDR(nil)}, v1.APIGroup{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, Name:"authentication.k8s.io", Versions:[]v1.GroupVersionForDiscovery{v1.GroupVersionForDiscovery{GroupVersion:"authentication.k8s.io/v1", Version:"v1"}, v1.GroupVersionForDiscovery{GroupVersion:"authentication.k8s.io/v1beta1", Version:"v1beta1"}}, PreferredVersion:v1.GroupVersionForDiscovery{GroupVersion:"authentication.k8s.io/v1", Version:"v1"}, ServerAddressByClientCIDRs:[]v1.ServerAddressByClientCIDR(nil)}, v1.APIGroup{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, Name:"authorization.k8s.io", Versions:[]v1.GroupVersionForDiscovery{v1.GroupVersionForDiscovery{GroupVersion:"authorization.k8s.io/v1", Version:"v1"}, 
v1.GroupVersionForDiscovery{GroupVersion:"authorization.k8s.io/v1beta1", Version:"v1beta1"}}, PreferredVersion:v1.GroupVersionForDiscovery{GroupVersion:"authorization.k8s.io/v1", Version:"v1"}, ServerAddressByClientCIDRs:[]v1.ServerAddressByClientCIDR(nil)}, v1.APIGroup{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, Name:"autoscaling", Versions:[]v1.GroupVersionForDiscovery{v1.GroupVersionForDiscovery{GroupVersion:"autoscaling/v1", Version:"v1"}, v1.GroupVersionForDiscovery{GroupVersion:"autoscaling/v2beta1", Version:"v2beta1"}, v1.GroupVersionForDiscovery{GroupVersion:"autoscaling/v2beta2", Version:"v2beta2"}}, PreferredVersion:v1.GroupVersionForDiscovery{GroupVersion:"autoscaling/v1", Version:"v1"}, ServerAddressByClientCIDRs:[]v1.ServerAddressByClientCIDR(nil)}, v1.APIGroup{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, Name:"batch", Versions:[]v1.GroupVersionForDiscovery{v1.GroupVersionForDiscovery{GroupVersion:"batch/v1", Version:"v1"}, v1.GroupVersionForDiscovery{GroupVersion:"batch/v1beta1", Version:"v1beta1"}}, PreferredVersion:v1.GroupVersionForDiscovery{GroupVersion:"batch/v1", Version:"v1"}, ServerAddressByClientCIDRs:[]v1.ServerAddressByClientCIDR(nil)}, v1.APIGroup{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, Name:"certificates.k8s.io", Versions:[]v1.GroupVersionForDiscovery{v1.GroupVersionForDiscovery{GroupVersion:"certificates.k8s.io/v1beta1", Version:"v1beta1"}}, PreferredVersion:v1.GroupVersionForDiscovery{GroupVersion:"certificates.k8s.io/v1beta1", Version:"v1beta1"}, ServerAddressByClientCIDRs:[]v1.ServerAddressByClientCIDR(nil)}, v1.APIGroup{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, Name:"networking.k8s.io", Versions:[]v1.GroupVersionForDiscovery{v1.GroupVersionForDiscovery{GroupVersion:"networking.k8s.io/v1", Version:"v1"}, v1.GroupVersionForDiscovery{GroupVersion:"networking.k8s.io/v1beta1", Version:"v1beta1"}}, PreferredVersion:v1.GroupVersionForDiscovery{GroupVersion:"networking.k8s.io/v1", Version:"v1"}, ServerAddressByClientCIDRs:[]v1.ServerAddressByClientCIDR(nil)}, v1.APIGroup{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, Name:"policy", Versions:[]v1.GroupVersionForDiscovery{v1.GroupVersionForDiscovery{GroupVersion:"policy/v1beta1", Version:"v1beta1"}}, PreferredVersion:v1.GroupVersionForDiscovery{GroupVersion:"policy/v1beta1", Version:"v1beta1"}, ServerAddressByClientCIDRs:[]v1.ServerAddressByClientCIDR(nil)}, v1.APIGroup{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, Name:"rbac.authorization.k8s.io", Versions:[]v1.GroupVersionForDiscovery{v1.GroupVersionForDiscovery{GroupVersion:"rbac.authorization.k8s.io/v1", Version:"v1"}, v1.GroupVersionForDiscovery{GroupVersion:"rbac.authorization.k8s.io/v1beta1", Version:"v1beta1"}}, PreferredVersion:v1.GroupVersionForDiscovery{GroupVersion:"rbac.authorization.k8s.io/v1", Version:"v1"}, ServerAddressByClientCIDRs:[]v1.ServerAddressByClientCIDR(nil)}, v1.APIGroup{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, Name:"storage.k8s.io", Versions:[]v1.GroupVersionForDiscovery{v1.GroupVersionForDiscovery{GroupVersion:"storage.k8s.io/v1", Version:"v1"}, v1.GroupVersionForDiscovery{GroupVersion:"storage.k8s.io/v1beta1", Version:"v1beta1"}}, PreferredVersion:v1.GroupVersionForDiscovery{GroupVersion:"storage.k8s.io/v1", Version:"v1"}, ServerAddressByClientCIDRs:[]v1.ServerAddressByClientCIDR(nil)}, v1.APIGroup{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, Name:"admissionregistration.k8s.io", Versions:[]v1.GroupVersionForDiscovery{v1.GroupVersionForDiscovery{GroupVersion:"admissionregistration.k8s.io/v1", Version:"v1"}, 
v1.GroupVersionForDiscovery{GroupVersion:"admissionregistration.k8s.io/v1beta1", Version:"v1beta1"}}, PreferredVersion:v1.GroupVersionForDiscovery{GroupVersion:"admissionregistration.k8s.io/v1", Version:"v1"}, ServerAddressByClientCIDRs:[]v1.ServerAddressByClientCIDR(nil)}, v1.APIGroup{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, Name:"apiextensions.k8s.io", Versions:[]v1.GroupVersionForDiscovery{v1.GroupVersionForDiscovery{GroupVersion:"apiextensions.k8s.io/v1", Version:"v1"}, v1.GroupVersionForDiscovery{GroupVersion:"apiextensions.k8s.io/v1beta1", Version:"v1beta1"}}, PreferredVersion:v1.GroupVersionForDiscovery{GroupVersion:"apiextensions.k8s.io/v1", Version:"v1"}, ServerAddressByClientCIDRs:[]v1.ServerAddressByClientCIDR(nil)}, v1.APIGroup{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, Name:"scheduling.k8s.io", Versions:[]v1.GroupVersionForDiscovery{v1.GroupVersionForDiscovery{GroupVersion:"scheduling.k8s.io/v1", Version:"v1"}, v1.GroupVersionForDiscovery{GroupVersion:"scheduling.k8s.io/v1beta1", Version:"v1beta1"}}, PreferredVersion:v1.GroupVersionForDiscovery{GroupVersion:"scheduling.k8s.io/v1", Version:"v1"}, ServerAddressByClientCIDRs:[]v1.ServerAddressByClientCIDR(nil)}, v1.APIGroup{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, Name:"coordination.k8s.io", Versions:[]v1.GroupVersionForDiscovery{v1.GroupVersionForDiscovery{GroupVersion:"coordination.k8s.io/v1", Version:"v1"}, v1.GroupVersionForDiscovery{GroupVersion:"coordination.k8s.io/v1beta1", Version:"v1beta1"}}, PreferredVersion:v1.GroupVersionForDiscovery{GroupVersion:"coordination.k8s.io/v1", Version:"v1"}, ServerAddressByClientCIDRs:[]v1.ServerAddressByClientCIDR(nil)}, v1.APIGroup{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, Name:"node.k8s.io", Versions:[]v1.GroupVersionForDiscovery{v1.GroupVersionForDiscovery{GroupVersion:"node.k8s.io/v1beta1", Version:"v1beta1"}}, PreferredVersion:v1.GroupVersionForDiscovery{GroupVersion:"node.k8s.io/v1beta1", Version:"v1beta1"}, ServerAddressByClientCIDRs:[]v1.ServerAddressByClientCIDR(nil)}, v1.APIGroup{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, Name:"discovery.k8s.io", Versions:[]v1.GroupVersionForDiscovery{v1.GroupVersionForDiscovery{GroupVersion:"discovery.k8s.io/v1beta1", Version:"v1beta1"}}, PreferredVersion:v1.GroupVersionForDiscovery{GroupVersion:"discovery.k8s.io/v1beta1", Version:"v1beta1"}, ServerAddressByClientCIDRs:[]v1.ServerAddressByClientCIDR(nil)}} Expected : false to equal : true /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/certificates.go:231 ------------------------------ {"msg":"FAILED [sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","total":294,"completed":164,"skipped":2844,"failed":1,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]"]} SS ------------------------------ [sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 1 00:33:06.845: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:251 [BeforeEach] Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1398 STEP: creating a pod Jul 1 00:33:06.916: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config run logs-generator --image=us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13 --namespace=kubectl-4268 -- logs-generator --log-lines-total 100 --run-duration 20s' Jul 1 00:33:07.043: INFO: stderr: "" Jul 1 00:33:07.043: INFO: stdout: "pod/logs-generator created\n" [It] should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Waiting for log generator to start. Jul 1 00:33:07.043: INFO: Waiting up to 5m0s for 1 pods to be running and ready, or succeeded: [logs-generator] Jul 1 00:33:07.043: INFO: Waiting up to 5m0s for pod "logs-generator" in namespace "kubectl-4268" to be "running and ready, or succeeded" Jul 1 00:33:07.066: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 22.679195ms Jul 1 00:33:09.131: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 2.08752211s Jul 1 00:33:11.134: INFO: Pod "logs-generator": Phase="Running", Reason="", readiness=true. Elapsed: 4.091121182s Jul 1 00:33:11.134: INFO: Pod "logs-generator" satisfied condition "running and ready, or succeeded" Jul 1 00:33:11.134: INFO: Wanted all 1 pods to be running and ready, or succeeded. Result: true. Pods: [logs-generator] STEP: checking for matching strings Jul 1 00:33:11.134: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-4268' Jul 1 00:33:11.261: INFO: stderr: "" Jul 1 00:33:11.261: INFO: stdout: "I0701 00:33:10.071523 1 logs_generator.go:76] 0 GET /api/v1/namespaces/ns/pods/57d 353\nI0701 00:33:10.271745 1 logs_generator.go:76] 1 POST /api/v1/namespaces/kube-system/pods/rnn 544\nI0701 00:33:10.471754 1 logs_generator.go:76] 2 GET /api/v1/namespaces/default/pods/7q2 293\nI0701 00:33:10.671688 1 logs_generator.go:76] 3 PUT /api/v1/namespaces/default/pods/r8br 429\nI0701 00:33:10.871814 1 logs_generator.go:76] 4 PUT /api/v1/namespaces/kube-system/pods/wbl2 473\nI0701 00:33:11.071696 1 logs_generator.go:76] 5 PUT /api/v1/namespaces/ns/pods/v8cc 518\n" STEP: limiting log lines Jul 1 00:33:11.261: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-4268 --tail=1' Jul 1 00:33:11.381: INFO: stderr: "" Jul 1 00:33:11.381: INFO: stdout: "I0701 00:33:11.271718 1 logs_generator.go:76] 6 GET /api/v1/namespaces/default/pods/rwpn 397\n" Jul 1 00:33:11.381: INFO: got output "I0701 00:33:11.271718 1 logs_generator.go:76] 6 GET /api/v1/namespaces/default/pods/rwpn 397\n" STEP: limiting log bytes Jul 1 00:33:11.381: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-4268 --limit-bytes=1' Jul 1 00:33:11.486: INFO: stderr: "" Jul 1 00:33:11.486: INFO: stdout: "I" Jul 1 00:33:11.486: INFO: got output "I" STEP: exposing timestamps Jul 1 00:33:11.486: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config logs logs-generator
logs-generator --namespace=kubectl-4268 --tail=1 --timestamps' Jul 1 00:33:11.591: INFO: stderr: "" Jul 1 00:33:11.591: INFO: stdout: "2020-07-01T00:33:11.471772296Z I0701 00:33:11.471647 1 logs_generator.go:76] 7 PUT /api/v1/namespaces/ns/pods/6cl9 428\n" Jul 1 00:33:11.591: INFO: got output "2020-07-01T00:33:11.471772296Z I0701 00:33:11.471647 1 logs_generator.go:76] 7 PUT /api/v1/namespaces/ns/pods/6cl9 428\n" STEP: restricting to a time range Jul 1 00:33:14.091: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-4268 --since=1s' Jul 1 00:33:14.206: INFO: stderr: "" Jul 1 00:33:14.206: INFO: stdout: "I0701 00:33:13.271712 1 logs_generator.go:76] 16 GET /api/v1/namespaces/kube-system/pods/8wf 204\nI0701 00:33:13.471715 1 logs_generator.go:76] 17 PUT /api/v1/namespaces/kube-system/pods/jhj 463\nI0701 00:33:13.671754 1 logs_generator.go:76] 18 PUT /api/v1/namespaces/kube-system/pods/kh8t 246\nI0701 00:33:13.871754 1 logs_generator.go:76] 19 PUT /api/v1/namespaces/kube-system/pods/llvd 327\nI0701 00:33:14.071689 1 logs_generator.go:76] 20 GET /api/v1/namespaces/ns/pods/tf2 482\n" Jul 1 00:33:14.206: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-4268 --since=24h' Jul 1 00:33:14.336: INFO: stderr: "" Jul 1 00:33:14.336: INFO: stdout: "I0701 00:33:10.071523 1 logs_generator.go:76] 0 GET /api/v1/namespaces/ns/pods/57d 353\nI0701 00:33:10.271745 1 logs_generator.go:76] 1 POST /api/v1/namespaces/kube-system/pods/rnn 544\nI0701 00:33:10.471754 1 logs_generator.go:76] 2 GET /api/v1/namespaces/default/pods/7q2 293\nI0701 00:33:10.671688 1 logs_generator.go:76] 3 PUT /api/v1/namespaces/default/pods/r8br 429\nI0701 00:33:10.871814 1 logs_generator.go:76] 4 PUT /api/v1/namespaces/kube-system/pods/wbl2 473\nI0701 00:33:11.071696 1 logs_generator.go:76] 5 PUT /api/v1/namespaces/ns/pods/v8cc 518\nI0701 00:33:11.271718 1 logs_generator.go:76] 6 GET /api/v1/namespaces/default/pods/rwpn 397\nI0701 00:33:11.471647 1 logs_generator.go:76] 7 PUT /api/v1/namespaces/ns/pods/6cl9 428\nI0701 00:33:11.671797 1 logs_generator.go:76] 8 POST /api/v1/namespaces/ns/pods/xhj 316\nI0701 00:33:11.871685 1 logs_generator.go:76] 9 POST /api/v1/namespaces/kube-system/pods/b6rv 245\nI0701 00:33:12.071756 1 logs_generator.go:76] 10 PUT /api/v1/namespaces/default/pods/hkn 583\nI0701 00:33:12.271680 1 logs_generator.go:76] 11 GET /api/v1/namespaces/ns/pods/48d7 263\nI0701 00:33:12.471781 1 logs_generator.go:76] 12 GET /api/v1/namespaces/default/pods/qqx 284\nI0701 00:33:12.671879 1 logs_generator.go:76] 13 PUT /api/v1/namespaces/ns/pods/vj4h 215\nI0701 00:33:12.871732 1 logs_generator.go:76] 14 GET /api/v1/namespaces/kube-system/pods/rrn 487\nI0701 00:33:13.071737 1 logs_generator.go:76] 15 PUT /api/v1/namespaces/default/pods/vpv 523\nI0701 00:33:13.271712 1 logs_generator.go:76] 16 GET /api/v1/namespaces/kube-system/pods/8wf 204\nI0701 00:33:13.471715 1 logs_generator.go:76] 17 PUT /api/v1/namespaces/kube-system/pods/jhj 463\nI0701 00:33:13.671754 1 logs_generator.go:76] 18 PUT /api/v1/namespaces/kube-system/pods/kh8t 246\nI0701 00:33:13.871754 1 logs_generator.go:76] 19 PUT /api/v1/namespaces/kube-system/pods/llvd 327\nI0701 00:33:14.071689 1 logs_generator.go:76] 20 GET /api/v1/namespaces/ns/pods/tf2 482\nI0701 00:33:14.271724 1 logs_generator.go:76] 21 POST /api/v1/namespaces/ns/pods/xmfb 387\n" 
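
For reference, the filtering steps above correspond to ordinary kubectl invocations. A minimal sketch against the pod created in this test (any running pod would do; all of the flags below are standard kubectl options):

kubectl logs logs-generator --namespace=kubectl-4268                        # full log
kubectl logs logs-generator --namespace=kubectl-4268 --tail=1               # only the most recent line
kubectl logs logs-generator --namespace=kubectl-4268 --limit-bytes=1        # cut the stream off after one byte
kubectl logs logs-generator --namespace=kubectl-4268 --tail=1 --timestamps  # prefix each line with an RFC3339 timestamp
kubectl logs logs-generator --namespace=kubectl-4268 --since=1s             # only entries newer than one second
kubectl logs logs-generator --namespace=kubectl-4268 --since=24h            # only entries newer than 24 hours
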
[AfterEach] Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1404 Jul 1 00:33:14.336: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config delete pod logs-generator --namespace=kubectl-4268' Jul 1 00:33:24.870: INFO: stderr: "" Jul 1 00:33:24.870: INFO: stdout: "pod \"logs-generator\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 1 00:33:24.870: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-4268" for this suite. • [SLOW TEST:18.041 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1394 should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]","total":294,"completed":165,"skipped":2846,"failed":1,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]"]} SSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 1 00:33:24.887: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Jul 1 00:33:25.738: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Jul 1 00:33:27.747: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729160405, loc:(*time.Location)(0x80643c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729160405, loc:(*time.Location)(0x80643c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729160405, loc:(*time.Location)(0x80643c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729160405, loc:(*time.Location)(0x80643c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service 
has paired with the endpoint Jul 1 00:33:30.803: INFO: Waiting for the number of service:e2e-test-webhook endpoints to be 1 [It] listing validating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Listing all of the created validation webhooks STEP: Creating a configMap that does not comply with the validation webhook rules STEP: Deleting the collection of validation webhooks STEP: Creating a configMap that does not comply with the validation webhook rules [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 1 00:33:31.356: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-1884" for this suite. STEP: Destroying namespace "webhook-1884-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:6.565 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 listing validating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance]","total":294,"completed":166,"skipped":2849,"failed":1,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]"]} SSSSSSSSSS ------------------------------ [sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 1 00:33:31.452: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] binary data should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name configmap-test-upd-a3b9c500-50ff-4282-b237-08b9e724f33e STEP: Creating the pod STEP: Waiting for pod with text data STEP: Waiting for pod with binary data [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 1 00:33:35.583: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-9818" for this suite.
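
The ConfigMap test above checks that both data (UTF-8 strings) and binaryData (base64-encoded bytes) keys appear as files when the ConfigMap is mounted as a volume. A minimal sketch of the same shape; the names and payloads are illustrative, not the generated ones from this run:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: ConfigMap
metadata:
  name: configmap-binary-demo    # illustrative name
data:
  text: "hello"
binaryData:
  binary: 3q2+7w==               # base64 for the raw bytes 0xde 0xad 0xbe 0xef
---
apiVersion: v1
kind: Pod
metadata:
  name: configmap-binary-reader  # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: reader
    image: docker.io/library/busybox:1.29   # already present in the node image lists above
    # Prints the text key, then the byte count of the binary key (4 for the payload above).
    command: ["sh", "-c", "cat /etc/cm/text; wc -c < /etc/cm/binary"]
    volumeMounts:
    - name: cm
      mountPath: /etc/cm
  volumes:
  - name: cm
    configMap:
      name: configmap-binary-demo
EOF
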
•{"msg":"PASSED [sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance]","total":294,"completed":167,"skipped":2859,"failed":1,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]"]} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 1 00:33:35.591: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin Jul 1 00:33:35.721: INFO: Waiting up to 5m0s for pod "downwardapi-volume-6dfa073a-f7df-4fda-972d-ba622fe19353" in namespace "projected-4268" to be "Succeeded or Failed" Jul 1 00:33:35.737: INFO: Pod "downwardapi-volume-6dfa073a-f7df-4fda-972d-ba622fe19353": Phase="Pending", Reason="", readiness=false. Elapsed: 15.612066ms Jul 1 00:33:37.741: INFO: Pod "downwardapi-volume-6dfa073a-f7df-4fda-972d-ba622fe19353": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020354625s Jul 1 00:33:39.746: INFO: Pod "downwardapi-volume-6dfa073a-f7df-4fda-972d-ba622fe19353": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.025149653s STEP: Saw pod success Jul 1 00:33:39.746: INFO: Pod "downwardapi-volume-6dfa073a-f7df-4fda-972d-ba622fe19353" satisfied condition "Succeeded or Failed" Jul 1 00:33:39.750: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-6dfa073a-f7df-4fda-972d-ba622fe19353 container client-container: STEP: delete the pod Jul 1 00:33:39.793: INFO: Waiting for pod downwardapi-volume-6dfa073a-f7df-4fda-972d-ba622fe19353 to disappear Jul 1 00:33:39.810: INFO: Pod downwardapi-volume-6dfa073a-f7df-4fda-972d-ba622fe19353 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 1 00:33:39.810: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-4268" for this suite. 
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":294,"completed":168,"skipped":2878,"failed":1,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]"]} SSSSSSS ------------------------------ [sig-apps] ReplicationController should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 1 00:33:39.819: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/rc.go:52 [It] should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating replication controller my-hostname-basic-ea951d47-c1f7-4b03-8e8c-f06ecfa19339 Jul 1 00:33:39.972: INFO: Pod name my-hostname-basic-ea951d47-c1f7-4b03-8e8c-f06ecfa19339: Found 0 pods out of 1 Jul 1 00:33:44.978: INFO: Pod name my-hostname-basic-ea951d47-c1f7-4b03-8e8c-f06ecfa19339: Found 1 pods out of 1 Jul 1 00:33:44.978: INFO: Ensuring all pods for ReplicationController "my-hostname-basic-ea951d47-c1f7-4b03-8e8c-f06ecfa19339" are running Jul 1 00:33:45.031: INFO: Pod "my-hostname-basic-ea951d47-c1f7-4b03-8e8c-f06ecfa19339-jp4bx" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-07-01 00:33:40 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-07-01 00:33:43 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-07-01 00:33:43 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-07-01 00:33:40 +0000 UTC Reason: Message:}]) Jul 1 00:33:45.031: INFO: Trying to dial the pod Jul 1 00:33:50.044: INFO: Controller my-hostname-basic-ea951d47-c1f7-4b03-8e8c-f06ecfa19339: Got expected result from replica 1 [my-hostname-basic-ea951d47-c1f7-4b03-8e8c-f06ecfa19339-jp4bx]: "my-hostname-basic-ea951d47-c1f7-4b03-8e8c-f06ecfa19339-jp4bx", 1 of 1 required successes so far [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 1 00:33:50.044: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-1071" for this suite. 
• [SLOW TEST:10.233 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] ReplicationController should serve a basic image on each replica with a public image [Conformance]","total":294,"completed":169,"skipped":2885,"failed":1,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]"]} [sig-storage] EmptyDir volumes should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 1 00:33:50.053: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir 0777 on node default medium Jul 1 00:33:50.111: INFO: Waiting up to 5m0s for pod "pod-2920d572-12c2-468e-a067-76de9400090a" in namespace "emptydir-7606" to be "Succeeded or Failed" Jul 1 00:33:50.144: INFO: Pod "pod-2920d572-12c2-468e-a067-76de9400090a": Phase="Pending", Reason="", readiness=false. Elapsed: 33.107046ms Jul 1 00:33:52.148: INFO: Pod "pod-2920d572-12c2-468e-a067-76de9400090a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.037337794s Jul 1 00:33:54.153: INFO: Pod "pod-2920d572-12c2-468e-a067-76de9400090a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.042033133s STEP: Saw pod success Jul 1 00:33:54.153: INFO: Pod "pod-2920d572-12c2-468e-a067-76de9400090a" satisfied condition "Succeeded or Failed" Jul 1 00:33:54.156: INFO: Trying to get logs from node latest-worker pod pod-2920d572-12c2-468e-a067-76de9400090a container test-container: STEP: delete the pod Jul 1 00:33:54.176: INFO: Waiting for pod pod-2920d572-12c2-468e-a067-76de9400090a to disappear Jul 1 00:33:54.180: INFO: Pod pod-2920d572-12c2-468e-a067-76de9400090a no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 1 00:33:54.180: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-7606" for this suite. 
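
The (non-root,0777,default) variant exercises an emptyDir volume on the default medium (node-backed storage), confirming a non-root user can use a directory the kubelet created with 0777 permissions. A minimal hand-written equivalent; the pod name and UID are illustrative:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-demo            # illustrative name
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1001              # any non-root UID
  containers:
  - name: test-container
    image: docker.io/library/busybox:1.29
    # Shows the volume directory's permissions (expected drwxrwxrwx) and proves it is writable as non-root.
    command: ["sh", "-c", "ls -ld /test-volume && echo data > /test-volume/f"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir: {}                 # empty spec selects the "default" medium
EOF
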
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":294,"completed":170,"skipped":2885,"failed":1,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]"]} S ------------------------------ [sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 1 00:33:54.186: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin Jul 1 00:33:54.320: INFO: Waiting up to 5m0s for pod "downwardapi-volume-c8215ffa-b9a8-4d15-b06a-8f398d0412e0" in namespace "downward-api-3436" to be "Succeeded or Failed" Jul 1 00:33:54.364: INFO: Pod "downwardapi-volume-c8215ffa-b9a8-4d15-b06a-8f398d0412e0": Phase="Pending", Reason="", readiness=false. Elapsed: 43.985277ms Jul 1 00:33:56.368: INFO: Pod "downwardapi-volume-c8215ffa-b9a8-4d15-b06a-8f398d0412e0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.048293689s Jul 1 00:33:58.373: INFO: Pod "downwardapi-volume-c8215ffa-b9a8-4d15-b06a-8f398d0412e0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.053245808s STEP: Saw pod success Jul 1 00:33:58.373: INFO: Pod "downwardapi-volume-c8215ffa-b9a8-4d15-b06a-8f398d0412e0" satisfied condition "Succeeded or Failed" Jul 1 00:33:58.376: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-c8215ffa-b9a8-4d15-b06a-8f398d0412e0 container client-container: STEP: delete the pod Jul 1 00:33:58.416: INFO: Waiting for pod downwardapi-volume-c8215ffa-b9a8-4d15-b06a-8f398d0412e0 to disappear Jul 1 00:33:58.423: INFO: Pod downwardapi-volume-c8215ffa-b9a8-4d15-b06a-8f398d0412e0 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 1 00:33:58.423: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-3436" for this suite. 
•{"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":294,"completed":171,"skipped":2886,"failed":1,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 1 00:33:58.436: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin Jul 1 00:33:58.520: INFO: Waiting up to 5m0s for pod "downwardapi-volume-4cb10d82-0c48-4285-818d-74696e0fafbe" in namespace "projected-3338" to be "Succeeded or Failed" Jul 1 00:33:58.524: INFO: Pod "downwardapi-volume-4cb10d82-0c48-4285-818d-74696e0fafbe": Phase="Pending", Reason="", readiness=false. Elapsed: 3.340734ms Jul 1 00:34:00.528: INFO: Pod "downwardapi-volume-4cb10d82-0c48-4285-818d-74696e0fafbe": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007885542s Jul 1 00:34:02.533: INFO: Pod "downwardapi-volume-4cb10d82-0c48-4285-818d-74696e0fafbe": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012631524s STEP: Saw pod success Jul 1 00:34:02.533: INFO: Pod "downwardapi-volume-4cb10d82-0c48-4285-818d-74696e0fafbe" satisfied condition "Succeeded or Failed" Jul 1 00:34:02.536: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-4cb10d82-0c48-4285-818d-74696e0fafbe container client-container: STEP: delete the pod Jul 1 00:34:02.571: INFO: Waiting for pod downwardapi-volume-4cb10d82-0c48-4285-818d-74696e0fafbe to disappear Jul 1 00:34:02.611: INFO: Pod downwardapi-volume-4cb10d82-0c48-4285-818d-74696e0fafbe no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 1 00:34:02.611: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3338" for this suite. 
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":294,"completed":172,"skipped":2990,"failed":1,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]"]} SSS ------------------------------ [sig-network] Services should be able to create a functioning NodePort service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 1 00:34:02.620: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:809 [It] should be able to create a functioning NodePort service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating service nodeport-test with type=NodePort in namespace services-7632 STEP: creating replication controller nodeport-test in namespace services-7632 I0701 00:34:02.799804 8 runners.go:190] Created replication controller with name: nodeport-test, namespace: services-7632, replica count: 2 I0701 00:34:05.850230 8 runners.go:190] nodeport-test Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0701 00:34:08.850489 8 runners.go:190] nodeport-test Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Jul 1 00:34:08.850: INFO: Creating new exec pod Jul 1 00:34:13.882: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-7632 execpodtp77t -- /bin/sh -x -c nc -zv -t -w 2 nodeport-test 80' Jul 1 00:34:14.137: INFO: stderr: "I0701 00:34:14.035014 1731 log.go:172] (0xc0003d0fd0) (0xc000c72640) Create stream\nI0701 00:34:14.035089 1731 log.go:172] (0xc0003d0fd0) (0xc000c72640) Stream added, broadcasting: 1\nI0701 00:34:14.039016 1731 log.go:172] (0xc0003d0fd0) Reply frame received for 1\nI0701 00:34:14.039056 1731 log.go:172] (0xc0003d0fd0) (0xc0000c9900) Create stream\nI0701 00:34:14.039067 1731 log.go:172] (0xc0003d0fd0) (0xc0000c9900) Stream added, broadcasting: 3\nI0701 00:34:14.039978 1731 log.go:172] (0xc0003d0fd0) Reply frame received for 3\nI0701 00:34:14.040019 1731 log.go:172] (0xc0003d0fd0) (0xc0006f0c80) Create stream\nI0701 00:34:14.040038 1731 log.go:172] (0xc0003d0fd0) (0xc0006f0c80) Stream added, broadcasting: 5\nI0701 00:34:14.041005 1731 log.go:172] (0xc0003d0fd0) Reply frame received for 5\nI0701 00:34:14.108393 1731 log.go:172] (0xc0003d0fd0) Data frame received for 5\nI0701 00:34:14.108420 1731 log.go:172] (0xc0006f0c80) (5) Data frame handling\nI0701 00:34:14.108439 1731 log.go:172] (0xc0006f0c80) (5) Data frame sent\n+ nc -zv -t -w 2 nodeport-test 80\nI0701 00:34:14.129706 1731 log.go:172] (0xc0003d0fd0) Data frame received for 5\nI0701 00:34:14.129741 1731 log.go:172] (0xc0006f0c80) (5) Data frame handling\nI0701 00:34:14.129775 1731 log.go:172] (0xc0006f0c80) (5) Data frame sent\nConnection to nodeport-test 80 port [tcp/http] succeeded!\nI0701 
00:34:14.129969 1731 log.go:172] (0xc0003d0fd0) Data frame received for 3\nI0701 00:34:14.130003 1731 log.go:172] (0xc0000c9900) (3) Data frame handling\nI0701 00:34:14.130037 1731 log.go:172] (0xc0003d0fd0) Data frame received for 5\nI0701 00:34:14.130062 1731 log.go:172] (0xc0006f0c80) (5) Data frame handling\nI0701 00:34:14.131475 1731 log.go:172] (0xc0003d0fd0) Data frame received for 1\nI0701 00:34:14.131502 1731 log.go:172] (0xc000c72640) (1) Data frame handling\nI0701 00:34:14.131523 1731 log.go:172] (0xc000c72640) (1) Data frame sent\nI0701 00:34:14.131546 1731 log.go:172] (0xc0003d0fd0) (0xc000c72640) Stream removed, broadcasting: 1\nI0701 00:34:14.131568 1731 log.go:172] (0xc0003d0fd0) Go away received\nI0701 00:34:14.131903 1731 log.go:172] (0xc0003d0fd0) (0xc000c72640) Stream removed, broadcasting: 1\nI0701 00:34:14.131922 1731 log.go:172] (0xc0003d0fd0) (0xc0000c9900) Stream removed, broadcasting: 3\nI0701 00:34:14.131931 1731 log.go:172] (0xc0003d0fd0) (0xc0006f0c80) Stream removed, broadcasting: 5\n" Jul 1 00:34:14.137: INFO: stdout: "" Jul 1 00:34:14.138: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-7632 execpodtp77t -- /bin/sh -x -c nc -zv -t -w 2 10.101.101.151 80' Jul 1 00:34:14.364: INFO: stderr: "I0701 00:34:14.281972 1752 log.go:172] (0xc00003a580) (0xc0008834a0) Create stream\nI0701 00:34:14.282043 1752 log.go:172] (0xc00003a580) (0xc0008834a0) Stream added, broadcasting: 1\nI0701 00:34:14.283771 1752 log.go:172] (0xc00003a580) Reply frame received for 1\nI0701 00:34:14.283803 1752 log.go:172] (0xc00003a580) (0xc000883c20) Create stream\nI0701 00:34:14.283815 1752 log.go:172] (0xc00003a580) (0xc000883c20) Stream added, broadcasting: 3\nI0701 00:34:14.284516 1752 log.go:172] (0xc00003a580) Reply frame received for 3\nI0701 00:34:14.284558 1752 log.go:172] (0xc00003a580) (0xc000878c80) Create stream\nI0701 00:34:14.284580 1752 log.go:172] (0xc00003a580) (0xc000878c80) Stream added, broadcasting: 5\nI0701 00:34:14.285548 1752 log.go:172] (0xc00003a580) Reply frame received for 5\nI0701 00:34:14.355224 1752 log.go:172] (0xc00003a580) Data frame received for 5\nI0701 00:34:14.355263 1752 log.go:172] (0xc000878c80) (5) Data frame handling\nI0701 00:34:14.355280 1752 log.go:172] (0xc000878c80) (5) Data frame sent\nI0701 00:34:14.355292 1752 log.go:172] (0xc00003a580) Data frame received for 5\nI0701 00:34:14.355301 1752 log.go:172] (0xc000878c80) (5) Data frame handling\n+ nc -zv -t -w 2 10.101.101.151 80\nConnection to 10.101.101.151 80 port [tcp/http] succeeded!\nI0701 00:34:14.355336 1752 log.go:172] (0xc00003a580) Data frame received for 3\nI0701 00:34:14.355347 1752 log.go:172] (0xc000883c20) (3) Data frame handling\nI0701 00:34:14.356944 1752 log.go:172] (0xc00003a580) Data frame received for 1\nI0701 00:34:14.356972 1752 log.go:172] (0xc0008834a0) (1) Data frame handling\nI0701 00:34:14.356989 1752 log.go:172] (0xc0008834a0) (1) Data frame sent\nI0701 00:34:14.357005 1752 log.go:172] (0xc00003a580) (0xc0008834a0) Stream removed, broadcasting: 1\nI0701 00:34:14.357021 1752 log.go:172] (0xc00003a580) Go away received\nI0701 00:34:14.357770 1752 log.go:172] (0xc00003a580) (0xc0008834a0) Stream removed, broadcasting: 1\nI0701 00:34:14.357795 1752 log.go:172] (0xc00003a580) (0xc000883c20) Stream removed, broadcasting: 3\nI0701 00:34:14.357811 1752 log.go:172] (0xc00003a580) (0xc000878c80) Stream removed, broadcasting: 5\n" Jul 1 00:34:14.364: INFO: stdout: "" Jul 1 00:34:14.364: INFO: 
Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-7632 execpodtp77t -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.13 32311' Jul 1 00:34:14.569: INFO: stderr: "I0701 00:34:14.498136 1775 log.go:172] (0xc000515b80) (0xc000ad25a0) Create stream\nI0701 00:34:14.498199 1775 log.go:172] (0xc000515b80) (0xc000ad25a0) Stream added, broadcasting: 1\nI0701 00:34:14.502718 1775 log.go:172] (0xc000515b80) Reply frame received for 1\nI0701 00:34:14.502762 1775 log.go:172] (0xc000515b80) (0xc0003da1e0) Create stream\nI0701 00:34:14.502781 1775 log.go:172] (0xc000515b80) (0xc0003da1e0) Stream added, broadcasting: 3\nI0701 00:34:14.503593 1775 log.go:172] (0xc000515b80) Reply frame received for 3\nI0701 00:34:14.503644 1775 log.go:172] (0xc000515b80) (0xc0000ddb80) Create stream\nI0701 00:34:14.503671 1775 log.go:172] (0xc000515b80) (0xc0000ddb80) Stream added, broadcasting: 5\nI0701 00:34:14.504462 1775 log.go:172] (0xc000515b80) Reply frame received for 5\nI0701 00:34:14.557566 1775 log.go:172] (0xc000515b80) Data frame received for 3\nI0701 00:34:14.557595 1775 log.go:172] (0xc0003da1e0) (3) Data frame handling\nI0701 00:34:14.557811 1775 log.go:172] (0xc000515b80) Data frame received for 5\nI0701 00:34:14.557832 1775 log.go:172] (0xc0000ddb80) (5) Data frame handling\nI0701 00:34:14.557863 1775 log.go:172] (0xc0000ddb80) (5) Data frame sent\nI0701 00:34:14.557879 1775 log.go:172] (0xc000515b80) Data frame received for 5\nI0701 00:34:14.557895 1775 log.go:172] (0xc0000ddb80) (5) Data frame handling\n+ nc -zv -t -w 2 172.17.0.13 32311\nConnection to 172.17.0.13 32311 port [tcp/32311] succeeded!\nI0701 00:34:14.559379 1775 log.go:172] (0xc000515b80) Data frame received for 1\nI0701 00:34:14.559409 1775 log.go:172] (0xc000ad25a0) (1) Data frame handling\nI0701 00:34:14.559431 1775 log.go:172] (0xc000ad25a0) (1) Data frame sent\nI0701 00:34:14.559451 1775 log.go:172] (0xc000515b80) (0xc000ad25a0) Stream removed, broadcasting: 1\nI0701 00:34:14.559487 1775 log.go:172] (0xc000515b80) Go away received\nI0701 00:34:14.559914 1775 log.go:172] (0xc000515b80) (0xc000ad25a0) Stream removed, broadcasting: 1\nI0701 00:34:14.559936 1775 log.go:172] (0xc000515b80) (0xc0003da1e0) Stream removed, broadcasting: 3\nI0701 00:34:14.559947 1775 log.go:172] (0xc000515b80) (0xc0000ddb80) Stream removed, broadcasting: 5\n" Jul 1 00:34:14.569: INFO: stdout: "" Jul 1 00:34:14.569: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-7632 execpodtp77t -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.12 32311' Jul 1 00:34:14.788: INFO: stderr: "I0701 00:34:14.702475 1797 log.go:172] (0xc000a911e0) (0xc0006e0c80) Create stream\nI0701 00:34:14.702552 1797 log.go:172] (0xc000a911e0) (0xc0006e0c80) Stream added, broadcasting: 1\nI0701 00:34:14.704895 1797 log.go:172] (0xc000a911e0) Reply frame received for 1\nI0701 00:34:14.704943 1797 log.go:172] (0xc000a911e0) (0xc000ac8140) Create stream\nI0701 00:34:14.704955 1797 log.go:172] (0xc000a911e0) (0xc000ac8140) Stream added, broadcasting: 3\nI0701 00:34:14.706355 1797 log.go:172] (0xc000a911e0) Reply frame received for 3\nI0701 00:34:14.706398 1797 log.go:172] (0xc000a911e0) (0xc000699040) Create stream\nI0701 00:34:14.706408 1797 log.go:172] (0xc000a911e0) (0xc000699040) Stream added, broadcasting: 5\nI0701 00:34:14.707194 1797 log.go:172] (0xc000a911e0) Reply frame received for 5\nI0701 00:34:14.778378 1797 log.go:172] (0xc000a911e0) Data frame 
received for 5\nI0701 00:34:14.778446 1797 log.go:172] (0xc000699040) (5) Data frame handling\nI0701 00:34:14.778471 1797 log.go:172] (0xc000699040) (5) Data frame sent\nI0701 00:34:14.778488 1797 log.go:172] (0xc000a911e0) Data frame received for 5\nI0701 00:34:14.778504 1797 log.go:172] (0xc000699040) (5) Data frame handling\n+ nc -zv -t -w 2 172.17.0.12 32311\nConnection to 172.17.0.12 32311 port [tcp/32311] succeeded!\nI0701 00:34:14.778540 1797 log.go:172] (0xc000a911e0) Data frame received for 3\nI0701 00:34:14.778560 1797 log.go:172] (0xc000ac8140) (3) Data frame handling\nI0701 00:34:14.779941 1797 log.go:172] (0xc000a911e0) Data frame received for 1\nI0701 00:34:14.779965 1797 log.go:172] (0xc0006e0c80) (1) Data frame handling\nI0701 00:34:14.779979 1797 log.go:172] (0xc0006e0c80) (1) Data frame sent\nI0701 00:34:14.780001 1797 log.go:172] (0xc000a911e0) (0xc0006e0c80) Stream removed, broadcasting: 1\nI0701 00:34:14.780021 1797 log.go:172] (0xc000a911e0) Go away received\nI0701 00:34:14.780499 1797 log.go:172] (0xc000a911e0) (0xc0006e0c80) Stream removed, broadcasting: 1\nI0701 00:34:14.780529 1797 log.go:172] (0xc000a911e0) (0xc000ac8140) Stream removed, broadcasting: 3\nI0701 00:34:14.780540 1797 log.go:172] (0xc000a911e0) (0xc000699040) Stream removed, broadcasting: 5\n" Jul 1 00:34:14.788: INFO: stdout: "" [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 1 00:34:14.788: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-7632" for this suite. [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:813 • [SLOW TEST:12.177 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to create a functioning NodePort service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to create a functioning NodePort service [Conformance]","total":294,"completed":173,"skipped":2993,"failed":1,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]"]} SSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replication controller. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 1 00:34:14.797: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a replication controller. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a ReplicationController STEP: Ensuring resource quota status captures replication controller creation STEP: Deleting a ReplicationController STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 1 00:34:26.045: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-3635" for this suite. • [SLOW TEST:11.258 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a replication controller. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replication controller. [Conformance]","total":294,"completed":174,"skipped":3001,"failed":1,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]"]} SSSSSSS ------------------------------ [sig-node] ConfigMap should fail to create ConfigMap with empty key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 1 00:34:26.055: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should fail to create ConfigMap with empty key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap that has name configmap-test-emptyKey-ff78520e-ff83-48fc-a214-83e71c15f5d3 [AfterEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 1 00:34:26.170: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-8420" for this suite. 
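The empty-key test above leans entirely on API-server validation of ConfigMap keys. A minimal sketch of provoking the same rejection from the command line (the object name is illustrative, and the error text is paraphrased from the usual validation message rather than taken from this run):

    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: empty-key-demo
    data:
      "": "value"
    EOF
    # Expected: the API server rejects the object with a validation error
    # along the lines of: data[]: Invalid value: "": a valid config key
    # must consist of alphanumeric characters, '-', '_' or '.'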
•{"msg":"PASSED [sig-node] ConfigMap should fail to create ConfigMap with empty key [Conformance]","total":294,"completed":175,"skipped":3008,"failed":1,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]"]} SSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 1 00:34:26.177: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Jul 1 00:34:26.950: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Jul 1 00:34:28.960: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729160467, loc:(*time.Location)(0x80643c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729160467, loc:(*time.Location)(0x80643c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729160467, loc:(*time.Location)(0x80643c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729160466, loc:(*time.Location)(0x80643c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)} Jul 1 00:34:30.964: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729160467, loc:(*time.Location)(0x80643c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729160467, loc:(*time.Location)(0x80643c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729160467, loc:(*time.Location)(0x80643c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729160466, loc:(*time.Location)(0x80643c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Jul 1 00:34:33.998: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 
[It] should unconditionally reject operations on fail closed webhook [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Registering a webhook that the API server cannot talk to, with a fail-closed policy, via the AdmissionRegistration API STEP: create a namespace for the webhook STEP: create a configmap that should be unconditionally rejected by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 1 00:34:34.118: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-8897" for this suite. STEP: Destroying namespace "webhook-8897-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:8.073 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should unconditionally reject operations on fail closed webhook [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","total":294,"completed":176,"skipped":3024,"failed":1,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 1 00:34:34.251: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54 [It] should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating pod liveness-09994b73-50fa-461b-8a4e-6cc4883881e5 in namespace container-probe-8673 Jul 1 00:34:38.379: INFO: Started pod liveness-09994b73-50fa-461b-8a4e-6cc4883881e5 in namespace container-probe-8673 STEP: checking the pod's current state and verifying that restartCount is present Jul 1 00:34:38.383: INFO: Initial restart count of pod liveness-09994b73-50fa-461b-8a4e-6cc4883881e5 is 0 Jul 1 00:34:58.460: INFO: Restart count of pod container-probe-8673/liveness-09994b73-50fa-461b-8a4e-6cc4883881e5 is now 1 (20.076970237s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 1 00:34:58.530: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-8673" for this suite.
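The restart recorded above (restartCount going from 0 to 1 after roughly 20 seconds) is the kubelet reacting to a failing httpGet liveness probe. A representative spec in the same spirit, borrowed from the standard liveness example rather than this test's exact manifest (image, timings, and names are illustrative):

    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: liveness-http-demo
    spec:
      containers:
      - name: liveness
        image: k8s.gcr.io/liveness   # serves /healthz OK briefly, then returns 500
        args: ["/server"]
        livenessProbe:
          httpGet:
            path: /healthz
            port: 8080
          initialDelaySeconds: 3
          periodSeconds: 3
    EOF
    kubectl get pod liveness-http-demo -w   # watch RESTARTS climb once /healthz starts failing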
• [SLOW TEST:24.295 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":294,"completed":177,"skipped":3071,"failed":1,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]"]} [sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 1 00:34:58.546: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:251 [It] should check if v1 is in available api versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: validating api versions Jul 1 00:34:58.598: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config api-versions' Jul 1 00:34:59.045: INFO: stderr: "" Jul 1 00:34:59.045: INFO: stdout: "admissionregistration.k8s.io/v1\nadmissionregistration.k8s.io/v1beta1\napiextensions.k8s.io/v1\napiextensions.k8s.io/v1beta1\napiregistration.k8s.io/v1\napiregistration.k8s.io/v1beta1\napps/v1\nauthentication.k8s.io/v1\nauthentication.k8s.io/v1beta1\nauthorization.k8s.io/v1\nauthorization.k8s.io/v1beta1\nautoscaling/v1\nautoscaling/v2beta1\nautoscaling/v2beta2\nbatch/v1\nbatch/v1beta1\ncertificates.k8s.io/v1beta1\ncoordination.k8s.io/v1\ncoordination.k8s.io/v1beta1\ndiscovery.k8s.io/v1beta1\nevents.k8s.io/v1beta1\nextensions/v1beta1\nnetworking.k8s.io/v1\nnetworking.k8s.io/v1beta1\nnode.k8s.io/v1beta1\npolicy/v1beta1\nrbac.authorization.k8s.io/v1\nrbac.authorization.k8s.io/v1beta1\nscheduling.k8s.io/v1\nscheduling.k8s.io/v1beta1\nstorage.k8s.io/v1\nstorage.k8s.io/v1beta1\nv1\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 1 00:34:59.046: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-3041" for this suite. 
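The api-versions check above only needs the literal group-version v1 to appear in the discovery output; an equivalent one-liner (exit status 0 exactly when v1 is present):

    kubectl api-versions | grep -x v1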
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions [Conformance]","total":294,"completed":178,"skipped":3071,"failed":1,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]"]} ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 1 00:34:59.121: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90 Jul 1 00:34:59.774: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Jul 1 00:34:59.794: INFO: Waiting for terminating namespaces to be deleted... Jul 1 00:34:59.798: INFO: Logging pods the apiserver thinks is on node latest-worker before test Jul 1 00:34:59.803: INFO: rally-c184502e-30nwopzm from c-rally-c184502e-zuy338to started at 2020-05-11 08:48:25 +0000 UTC (1 container statuses recorded) Jul 1 00:34:59.803: INFO: Container rally-c184502e-30nwopzm ready: true, restart count 0 Jul 1 00:34:59.803: INFO: rally-c184502e-30nwopzm-7fmqm from c-rally-c184502e-zuy338to started at 2020-05-11 08:48:29 +0000 UTC (1 container statuses recorded) Jul 1 00:34:59.803: INFO: Container rally-c184502e-30nwopzm ready: false, restart count 0 Jul 1 00:34:59.803: INFO: kindnet-hg2tf from kube-system started at 2020-04-29 09:54:13 +0000 UTC (1 container statuses recorded) Jul 1 00:34:59.803: INFO: Container kindnet-cni ready: true, restart count 7 Jul 1 00:34:59.803: INFO: kube-proxy-c8n27 from kube-system started at 2020-04-29 09:54:13 +0000 UTC (1 container statuses recorded) Jul 1 00:34:59.803: INFO: Container kube-proxy ready: true, restart count 0 Jul 1 00:34:59.803: INFO: Logging pods the apiserver thinks is on node latest-worker2 before test Jul 1 00:34:59.807: INFO: rally-c184502e-ept97j69-6xvbj from c-rally-c184502e-2luhd3t4 started at 2020-05-11 08:48:03 +0000 UTC (1 container statuses recorded) Jul 1 00:34:59.808: INFO: Container rally-c184502e-ept97j69 ready: false, restart count 0 Jul 1 00:34:59.808: INFO: terminate-cmd-rpa297bb112-e54d-4fcd-9997-b59cbf421a58 from container-runtime-7090 started at 2020-05-12 09:11:35 +0000 UTC (1 container statuses recorded) Jul 1 00:34:59.808: INFO: Container terminate-cmd-rpa ready: true, restart count 2 Jul 1 00:34:59.808: INFO: kindnet-jl4dn from kube-system started at 2020-04-29 09:54:11 +0000 UTC (1 container statuses recorded) Jul 1 00:34:59.808: INFO: Container kindnet-cni ready: true, restart count 7 Jul 1 00:34:59.808: INFO: kube-proxy-pcmmp from kube-system started at 2020-04-29 09:54:11 +0000 UTC (1 container statuses recorded) Jul 1 00:34:59.808: INFO: Container kube-proxy ready: true, restart count 0 [It] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: 
Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete the pod here to free the resource it takes. STEP: Trying to apply a random label on the found node. STEP: verifying the node has the label kubernetes.io/e2e-f8ccf95a-9ec2-4c53-be0a-15bdc8fc8d9f 90 STEP: Trying to create a pod (pod1) with hostport 54321 and hostIP 127.0.0.1 and expect it to be scheduled STEP: Trying to create another pod (pod2) with hostport 54321 but hostIP 127.0.0.2 on the node where pod1 resides, and expect it to be scheduled STEP: Trying to create a third pod (pod3) with hostport 54321 and hostIP 127.0.0.2 but using the UDP protocol, on the node where pod2 resides STEP: removing the label kubernetes.io/e2e-f8ccf95a-9ec2-4c53-be0a-15bdc8fc8d9f off the node latest-worker STEP: verifying the node doesn't have the label kubernetes.io/e2e-f8ccf95a-9ec2-4c53-be0a-15bdc8fc8d9f [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 1 00:35:16.064: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-1773" for this suite. [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 • [SLOW TEST:16.951 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance]","total":294,"completed":179,"skipped":3071,"failed":1,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]"]} [k8s.io] Variable Expansion should verify that a failing subpath expansion can be modified during the lifecycle of a container [sig-storage][Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 1 00:35:16.073: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should verify that a failing subpath expansion can be modified during the lifecycle of a container [sig-storage][Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating the pod with a failed condition STEP: updating the pod Jul 1 00:37:16.842: INFO: Successfully updated pod "var-expansion-038db221-a9e9-4cab-aaf5-da6078595063" STEP: waiting for pod running STEP: deleting the pod gracefully Jul 1 00:37:18.878: INFO: Deleting pod "var-expansion-038db221-a9e9-4cab-aaf5-da6078595063" in namespace "var-expansion-1594" Jul 1 00:37:18.883: INFO: Wait up to 5m0s for pod "var-expansion-038db221-a9e9-4cab-aaf5-da6078595063" to be fully deleted [AfterEach] [k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 1 00:37:56.913: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-1594" for this suite. • [SLOW TEST:160.850 seconds] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should verify that a failing subpath expansion can be modified during the lifecycle of a container [sig-storage][Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Variable Expansion should verify that a failing subpath expansion can be modified during the lifecycle of a container [sig-storage][Slow] [Conformance]","total":294,"completed":180,"skipped":3071,"failed":1,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]"]} SS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 1 00:37:56.923: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:88 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:103 STEP: Creating service test in namespace statefulset-560 [It] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Initializing watcher for selector baz=blah,foo=bar STEP: Creating stateful set ss in namespace statefulset-560 STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-560 Jul 1 00:37:57.101: INFO: Found 0 stateful pods, waiting for 1 Jul 1 00:38:07.108: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: Confirming that stateful set scale up will halt with unhealthy stateful pod Jul 1 00:38:07.111: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-560 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Jul 1 00:38:07.442: INFO: stderr: "I0701 00:38:07.243976 1835 log.go:172] (0xc000a51340) (0xc000699cc0) Create stream\nI0701 00:38:07.244044 1835 log.go:172] (0xc000a51340) (0xc000699cc0) Stream added, broadcasting: 1\nI0701 00:38:07.248541 1835 log.go:172] (0xc000a51340) Reply frame received for 1\nI0701 00:38:07.248583 1835 log.go:172] (0xc000a51340) (0xc0006485a0) Create stream\nI0701 00:38:07.248595 1835 log.go:172] (0xc000a51340) (0xc0006485a0) Stream added, broadcasting: 3\nI0701 00:38:07.249508 1835 log.go:172] (0xc000a51340) Reply frame received for 3\nI0701 00:38:07.249532 
1835 log.go:172] (0xc000a51340) (0xc00055c0a0) Create stream\nI0701 00:38:07.249540 1835 log.go:172] (0xc000a51340) (0xc00055c0a0) Stream added, broadcasting: 5\nI0701 00:38:07.250171 1835 log.go:172] (0xc000a51340) Reply frame received for 5\nI0701 00:38:07.348093 1835 log.go:172] (0xc000a51340) Data frame received for 5\nI0701 00:38:07.348116 1835 log.go:172] (0xc00055c0a0) (5) Data frame handling\nI0701 00:38:07.348132 1835 log.go:172] (0xc00055c0a0) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0701 00:38:07.434195 1835 log.go:172] (0xc000a51340) Data frame received for 3\nI0701 00:38:07.434224 1835 log.go:172] (0xc0006485a0) (3) Data frame handling\nI0701 00:38:07.434249 1835 log.go:172] (0xc0006485a0) (3) Data frame sent\nI0701 00:38:07.434260 1835 log.go:172] (0xc000a51340) Data frame received for 3\nI0701 00:38:07.434267 1835 log.go:172] (0xc0006485a0) (3) Data frame handling\nI0701 00:38:07.434385 1835 log.go:172] (0xc000a51340) Data frame received for 5\nI0701 00:38:07.434409 1835 log.go:172] (0xc00055c0a0) (5) Data frame handling\nI0701 00:38:07.435838 1835 log.go:172] (0xc000a51340) Data frame received for 1\nI0701 00:38:07.435852 1835 log.go:172] (0xc000699cc0) (1) Data frame handling\nI0701 00:38:07.435862 1835 log.go:172] (0xc000699cc0) (1) Data frame sent\nI0701 00:38:07.435869 1835 log.go:172] (0xc000a51340) (0xc000699cc0) Stream removed, broadcasting: 1\nI0701 00:38:07.436084 1835 log.go:172] (0xc000a51340) (0xc000699cc0) Stream removed, broadcasting: 1\nI0701 00:38:07.436094 1835 log.go:172] (0xc000a51340) (0xc0006485a0) Stream removed, broadcasting: 3\nI0701 00:38:07.436100 1835 log.go:172] (0xc000a51340) (0xc00055c0a0) Stream removed, broadcasting: 5\nI0701 00:38:07.436115 1835 log.go:172] (0xc000a51340) Go away received\n" Jul 1 00:38:07.442: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Jul 1 00:38:07.442: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Jul 1 00:38:07.445: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true Jul 1 00:38:17.450: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Jul 1 00:38:17.450: INFO: Waiting for statefulset status.replicas updated to 0 Jul 1 00:38:17.474: INFO: Verifying statefulset ss doesn't scale past 1 for another 9.999999751s Jul 1 00:38:18.500: INFO: Verifying statefulset ss doesn't scale past 1 for another 8.984942894s Jul 1 00:38:19.504: INFO: Verifying statefulset ss doesn't scale past 1 for another 7.959351137s Jul 1 00:38:20.508: INFO: Verifying statefulset ss doesn't scale past 1 for another 6.955332146s Jul 1 00:38:21.514: INFO: Verifying statefulset ss doesn't scale past 1 for another 5.951180102s Jul 1 00:38:22.519: INFO: Verifying statefulset ss doesn't scale past 1 for another 4.945714527s Jul 1 00:38:23.523: INFO: Verifying statefulset ss doesn't scale past 1 for another 3.940651109s Jul 1 00:38:24.528: INFO: Verifying statefulset ss doesn't scale past 1 for another 2.936061667s Jul 1 00:38:25.533: INFO: Verifying statefulset ss doesn't scale past 1 for another 1.93149832s Jul 1 00:38:26.538: INFO: Verifying statefulset ss doesn't scale past 1 for another 926.592523ms STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-560 Jul 1 00:38:27.541: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 
--kubeconfig=/root/.kube/config exec --namespace=statefulset-560 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jul 1 00:38:27.751: INFO: stderr: "I0701 00:38:27.681849 1855 log.go:172] (0xc000b754a0) (0xc0006952c0) Create stream\nI0701 00:38:27.681902 1855 log.go:172] (0xc000b754a0) (0xc0006952c0) Stream added, broadcasting: 1\nI0701 00:38:27.685428 1855 log.go:172] (0xc000b754a0) Reply frame received for 1\nI0701 00:38:27.685464 1855 log.go:172] (0xc000b754a0) (0xc000674280) Create stream\nI0701 00:38:27.685480 1855 log.go:172] (0xc000b754a0) (0xc000674280) Stream added, broadcasting: 3\nI0701 00:38:27.686432 1855 log.go:172] (0xc000b754a0) Reply frame received for 3\nI0701 00:38:27.686471 1855 log.go:172] (0xc000b754a0) (0xc000610280) Create stream\nI0701 00:38:27.686484 1855 log.go:172] (0xc000b754a0) (0xc000610280) Stream added, broadcasting: 5\nI0701 00:38:27.687184 1855 log.go:172] (0xc000b754a0) Reply frame received for 5\nI0701 00:38:27.744932 1855 log.go:172] (0xc000b754a0) Data frame received for 3\nI0701 00:38:27.744971 1855 log.go:172] (0xc000674280) (3) Data frame handling\nI0701 00:38:27.744993 1855 log.go:172] (0xc000674280) (3) Data frame sent\nI0701 00:38:27.745017 1855 log.go:172] (0xc000b754a0) Data frame received for 3\nI0701 00:38:27.745034 1855 log.go:172] (0xc000674280) (3) Data frame handling\nI0701 00:38:27.745103 1855 log.go:172] (0xc000b754a0) Data frame received for 5\nI0701 00:38:27.745256 1855 log.go:172] (0xc000610280) (5) Data frame handling\nI0701 00:38:27.745266 1855 log.go:172] (0xc000610280) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0701 00:38:27.745427 1855 log.go:172] (0xc000b754a0) Data frame received for 5\nI0701 00:38:27.745463 1855 log.go:172] (0xc000610280) (5) Data frame handling\nI0701 00:38:27.747013 1855 log.go:172] (0xc000b754a0) Data frame received for 1\nI0701 00:38:27.747026 1855 log.go:172] (0xc0006952c0) (1) Data frame handling\nI0701 00:38:27.747036 1855 log.go:172] (0xc0006952c0) (1) Data frame sent\nI0701 00:38:27.747134 1855 log.go:172] (0xc000b754a0) (0xc0006952c0) Stream removed, broadcasting: 1\nI0701 00:38:27.747319 1855 log.go:172] (0xc000b754a0) Go away received\nI0701 00:38:27.747393 1855 log.go:172] (0xc000b754a0) (0xc0006952c0) Stream removed, broadcasting: 1\nI0701 00:38:27.747409 1855 log.go:172] (0xc000b754a0) (0xc000674280) Stream removed, broadcasting: 3\nI0701 00:38:27.747421 1855 log.go:172] (0xc000b754a0) (0xc000610280) Stream removed, broadcasting: 5\n" Jul 1 00:38:27.751: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Jul 1 00:38:27.751: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Jul 1 00:38:27.755: INFO: Found 1 stateful pods, waiting for 3 Jul 1 00:38:37.760: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true Jul 1 00:38:37.761: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true Jul 1 00:38:37.761: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Verifying that stateful set ss was scaled up in order STEP: Scale down will halt with unhealthy stateful pod Jul 1 00:38:37.768: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-560 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Jul 1 
00:38:37.970: INFO: stderr: "I0701 00:38:37.900203 1876 log.go:172] (0xc0009f7290) (0xc0005b4a00) Create stream\nI0701 00:38:37.900277 1876 log.go:172] (0xc0009f7290) (0xc0005b4a00) Stream added, broadcasting: 1\nI0701 00:38:37.904721 1876 log.go:172] (0xc0009f7290) Reply frame received for 1\nI0701 00:38:37.904761 1876 log.go:172] (0xc0009f7290) (0xc000359cc0) Create stream\nI0701 00:38:37.904773 1876 log.go:172] (0xc0009f7290) (0xc000359cc0) Stream added, broadcasting: 3\nI0701 00:38:37.905792 1876 log.go:172] (0xc0009f7290) Reply frame received for 3\nI0701 00:38:37.905819 1876 log.go:172] (0xc0009f7290) (0xc0005737c0) Create stream\nI0701 00:38:37.905829 1876 log.go:172] (0xc0009f7290) (0xc0005737c0) Stream added, broadcasting: 5\nI0701 00:38:37.906855 1876 log.go:172] (0xc0009f7290) Reply frame received for 5\nI0701 00:38:37.961231 1876 log.go:172] (0xc0009f7290) Data frame received for 5\nI0701 00:38:37.961268 1876 log.go:172] (0xc0005737c0) (5) Data frame handling\nI0701 00:38:37.961279 1876 log.go:172] (0xc0005737c0) (5) Data frame sent\nI0701 00:38:37.961286 1876 log.go:172] (0xc0009f7290) Data frame received for 5\nI0701 00:38:37.961291 1876 log.go:172] (0xc0005737c0) (5) Data frame handling\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0701 00:38:37.961372 1876 log.go:172] (0xc0009f7290) Data frame received for 3\nI0701 00:38:37.961389 1876 log.go:172] (0xc000359cc0) (3) Data frame handling\nI0701 00:38:37.961404 1876 log.go:172] (0xc000359cc0) (3) Data frame sent\nI0701 00:38:37.961414 1876 log.go:172] (0xc0009f7290) Data frame received for 3\nI0701 00:38:37.961422 1876 log.go:172] (0xc000359cc0) (3) Data frame handling\nI0701 00:38:37.962931 1876 log.go:172] (0xc0009f7290) Data frame received for 1\nI0701 00:38:37.962954 1876 log.go:172] (0xc0005b4a00) (1) Data frame handling\nI0701 00:38:37.962974 1876 log.go:172] (0xc0005b4a00) (1) Data frame sent\nI0701 00:38:37.962985 1876 log.go:172] (0xc0009f7290) (0xc0005b4a00) Stream removed, broadcasting: 1\nI0701 00:38:37.963005 1876 log.go:172] (0xc0009f7290) Go away received\nI0701 00:38:37.963607 1876 log.go:172] (0xc0009f7290) (0xc0005b4a00) Stream removed, broadcasting: 1\nI0701 00:38:37.963635 1876 log.go:172] (0xc0009f7290) (0xc000359cc0) Stream removed, broadcasting: 3\nI0701 00:38:37.963646 1876 log.go:172] (0xc0009f7290) (0xc0005737c0) Stream removed, broadcasting: 5\n" Jul 1 00:38:37.970: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Jul 1 00:38:37.970: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Jul 1 00:38:37.970: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-560 ss-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Jul 1 00:38:38.232: INFO: stderr: "I0701 00:38:38.102655 1898 log.go:172] (0xc000a39290) (0xc00086f720) Create stream\nI0701 00:38:38.102715 1898 log.go:172] (0xc000a39290) (0xc00086f720) Stream added, broadcasting: 1\nI0701 00:38:38.106958 1898 log.go:172] (0xc000a39290) Reply frame received for 1\nI0701 00:38:38.107014 1898 log.go:172] (0xc000a39290) (0xc0007068c0) Create stream\nI0701 00:38:38.107032 1898 log.go:172] (0xc000a39290) (0xc0007068c0) Stream added, broadcasting: 3\nI0701 00:38:38.108194 1898 log.go:172] (0xc000a39290) Reply frame received for 3\nI0701 00:38:38.108227 1898 log.go:172] (0xc000a39290) (0xc000693720) Create stream\nI0701 
00:38:38.108238 1898 log.go:172] (0xc000a39290) (0xc000693720) Stream added, broadcasting: 5\nI0701 00:38:38.109537 1898 log.go:172] (0xc000a39290) Reply frame received for 5\nI0701 00:38:38.188624 1898 log.go:172] (0xc000a39290) Data frame received for 5\nI0701 00:38:38.188662 1898 log.go:172] (0xc000693720) (5) Data frame handling\nI0701 00:38:38.188690 1898 log.go:172] (0xc000693720) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0701 00:38:38.222067 1898 log.go:172] (0xc000a39290) Data frame received for 3\nI0701 00:38:38.222112 1898 log.go:172] (0xc0007068c0) (3) Data frame handling\nI0701 00:38:38.222137 1898 log.go:172] (0xc0007068c0) (3) Data frame sent\nI0701 00:38:38.222152 1898 log.go:172] (0xc000a39290) Data frame received for 3\nI0701 00:38:38.222173 1898 log.go:172] (0xc0007068c0) (3) Data frame handling\nI0701 00:38:38.222296 1898 log.go:172] (0xc000a39290) Data frame received for 5\nI0701 00:38:38.222319 1898 log.go:172] (0xc000693720) (5) Data frame handling\nI0701 00:38:38.224244 1898 log.go:172] (0xc000a39290) Data frame received for 1\nI0701 00:38:38.224269 1898 log.go:172] (0xc00086f720) (1) Data frame handling\nI0701 00:38:38.224293 1898 log.go:172] (0xc00086f720) (1) Data frame sent\nI0701 00:38:38.224312 1898 log.go:172] (0xc000a39290) (0xc00086f720) Stream removed, broadcasting: 1\nI0701 00:38:38.224332 1898 log.go:172] (0xc000a39290) Go away received\nI0701 00:38:38.224700 1898 log.go:172] (0xc000a39290) (0xc00086f720) Stream removed, broadcasting: 1\nI0701 00:38:38.224726 1898 log.go:172] (0xc000a39290) (0xc0007068c0) Stream removed, broadcasting: 3\nI0701 00:38:38.224736 1898 log.go:172] (0xc000a39290) (0xc000693720) Stream removed, broadcasting: 5\n" Jul 1 00:38:38.232: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Jul 1 00:38:38.232: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Jul 1 00:38:38.232: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-560 ss-2 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Jul 1 00:38:38.510: INFO: stderr: "I0701 00:38:38.400415 1918 log.go:172] (0xc0009ad970) (0xc000822c80) Create stream\nI0701 00:38:38.400468 1918 log.go:172] (0xc0009ad970) (0xc000822c80) Stream added, broadcasting: 1\nI0701 00:38:38.403024 1918 log.go:172] (0xc0009ad970) Reply frame received for 1\nI0701 00:38:38.403083 1918 log.go:172] (0xc0009ad970) (0xc000af01e0) Create stream\nI0701 00:38:38.403100 1918 log.go:172] (0xc0009ad970) (0xc000af01e0) Stream added, broadcasting: 3\nI0701 00:38:38.403851 1918 log.go:172] (0xc0009ad970) Reply frame received for 3\nI0701 00:38:38.403882 1918 log.go:172] (0xc0009ad970) (0xc0006e8a00) Create stream\nI0701 00:38:38.403890 1918 log.go:172] (0xc0009ad970) (0xc0006e8a00) Stream added, broadcasting: 5\nI0701 00:38:38.404675 1918 log.go:172] (0xc0009ad970) Reply frame received for 5\nI0701 00:38:38.467592 1918 log.go:172] (0xc0009ad970) Data frame received for 5\nI0701 00:38:38.467618 1918 log.go:172] (0xc0006e8a00) (5) Data frame handling\nI0701 00:38:38.467633 1918 log.go:172] (0xc0006e8a00) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0701 00:38:38.497993 1918 log.go:172] (0xc0009ad970) Data frame received for 5\nI0701 00:38:38.498043 1918 log.go:172] (0xc0006e8a00) (5) Data frame handling\nI0701 00:38:38.498077 1918 log.go:172] 
(0xc0009ad970) Data frame received for 3\nI0701 00:38:38.498092 1918 log.go:172] (0xc000af01e0) (3) Data frame handling\nI0701 00:38:38.498117 1918 log.go:172] (0xc000af01e0) (3) Data frame sent\nI0701 00:38:38.498137 1918 log.go:172] (0xc0009ad970) Data frame received for 3\nI0701 00:38:38.498151 1918 log.go:172] (0xc000af01e0) (3) Data frame handling\nI0701 00:38:38.500190 1918 log.go:172] (0xc0009ad970) Data frame received for 1\nI0701 00:38:38.500221 1918 log.go:172] (0xc000822c80) (1) Data frame handling\nI0701 00:38:38.500234 1918 log.go:172] (0xc000822c80) (1) Data frame sent\nI0701 00:38:38.500248 1918 log.go:172] (0xc0009ad970) (0xc000822c80) Stream removed, broadcasting: 1\nI0701 00:38:38.500270 1918 log.go:172] (0xc0009ad970) Go away received\nI0701 00:38:38.500952 1918 log.go:172] (0xc0009ad970) (0xc000822c80) Stream removed, broadcasting: 1\nI0701 00:38:38.500976 1918 log.go:172] (0xc0009ad970) (0xc000af01e0) Stream removed, broadcasting: 3\nI0701 00:38:38.500989 1918 log.go:172] (0xc0009ad970) (0xc0006e8a00) Stream removed, broadcasting: 5\n" Jul 1 00:38:38.510: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Jul 1 00:38:38.510: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-2: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Jul 1 00:38:38.510: INFO: Waiting for statefulset status.replicas updated to 0 Jul 1 00:38:38.513: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 2 Jul 1 00:38:48.522: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Jul 1 00:38:48.522: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false Jul 1 00:38:48.522: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false Jul 1 00:38:48.540: INFO: Verifying statefulset ss doesn't scale past 3 for another 9.999999793s Jul 1 00:38:49.546: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.989192155s Jul 1 00:38:50.551: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.98361521s Jul 1 00:38:51.558: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.978496158s Jul 1 00:38:52.564: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.972162979s Jul 1 00:38:53.568: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.966093138s Jul 1 00:38:54.573: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.961590608s Jul 1 00:38:55.578: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.956441931s Jul 1 00:38:56.584: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.951513144s Jul 1 00:38:57.587: INFO: Verifying statefulset ss doesn't scale past 3 for another 946.157847ms STEP: Scaling down stateful set ss to 0 replicas and waiting until no pods run in namespace statefulset-560 Jul 1 00:38:58.593: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-560 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jul 1 00:38:58.839: INFO: stderr: "I0701 00:38:58.728010 1938 log.go:172] (0xc0009e54a0) (0xc00086dcc0) Create stream\nI0701 00:38:58.728083 1938 log.go:172] (0xc0009e54a0) (0xc00086dcc0) Stream added, broadcasting: 1\nI0701 00:38:58.734066 1938 log.go:172] (0xc0009e54a0) Reply frame received for 1\nI0701 00:38:58.734103 1938 log.go:172] (0xc0009e54a0) (0xc0008605a0) Create
stream\nI0701 00:38:58.734112 1938 log.go:172] (0xc0009e54a0) (0xc0008605a0) Stream added, broadcasting: 3\nI0701 00:38:58.735207 1938 log.go:172] (0xc0009e54a0) Reply frame received for 3\nI0701 00:38:58.735247 1938 log.go:172] (0xc0009e54a0) (0xc00085a8c0) Create stream\nI0701 00:38:58.735260 1938 log.go:172] (0xc0009e54a0) (0xc00085a8c0) Stream added, broadcasting: 5\nI0701 00:38:58.736484 1938 log.go:172] (0xc0009e54a0) Reply frame received for 5\nI0701 00:38:58.828893 1938 log.go:172] (0xc0009e54a0) Data frame received for 5\nI0701 00:38:58.828958 1938 log.go:172] (0xc00085a8c0) (5) Data frame handling\nI0701 00:38:58.828983 1938 log.go:172] (0xc00085a8c0) (5) Data frame sent\nI0701 00:38:58.829011 1938 log.go:172] (0xc0009e54a0) Data frame received for 5\nI0701 00:38:58.829032 1938 log.go:172] (0xc00085a8c0) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0701 00:38:58.829069 1938 log.go:172] (0xc0009e54a0) Data frame received for 3\nI0701 00:38:58.829102 1938 log.go:172] (0xc0008605a0) (3) Data frame handling\nI0701 00:38:58.829332 1938 log.go:172] (0xc0008605a0) (3) Data frame sent\nI0701 00:38:58.829355 1938 log.go:172] (0xc0009e54a0) Data frame received for 3\nI0701 00:38:58.829375 1938 log.go:172] (0xc0008605a0) (3) Data frame handling\nI0701 00:38:58.830794 1938 log.go:172] (0xc0009e54a0) Data frame received for 1\nI0701 00:38:58.830829 1938 log.go:172] (0xc00086dcc0) (1) Data frame handling\nI0701 00:38:58.830864 1938 log.go:172] (0xc00086dcc0) (1) Data frame sent\nI0701 00:38:58.830899 1938 log.go:172] (0xc0009e54a0) (0xc00086dcc0) Stream removed, broadcasting: 1\nI0701 00:38:58.830942 1938 log.go:172] (0xc0009e54a0) Go away received\nI0701 00:38:58.831545 1938 log.go:172] (0xc0009e54a0) (0xc00086dcc0) Stream removed, broadcasting: 1\nI0701 00:38:58.831591 1938 log.go:172] (0xc0009e54a0) (0xc0008605a0) Stream removed, broadcasting: 3\nI0701 00:38:58.831612 1938 log.go:172] (0xc0009e54a0) (0xc00085a8c0) Stream removed, broadcasting: 5\n" Jul 1 00:38:58.839: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Jul 1 00:38:58.839: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Jul 1 00:38:58.839: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-560 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jul 1 00:38:59.114: INFO: stderr: "I0701 00:38:59.022684 1959 log.go:172] (0xc00097c9a0) (0xc00059afa0) Create stream\nI0701 00:38:59.022747 1959 log.go:172] (0xc00097c9a0) (0xc00059afa0) Stream added, broadcasting: 1\nI0701 00:38:59.025448 1959 log.go:172] (0xc00097c9a0) Reply frame received for 1\nI0701 00:38:59.025494 1959 log.go:172] (0xc00097c9a0) (0xc000508460) Create stream\nI0701 00:38:59.025505 1959 log.go:172] (0xc00097c9a0) (0xc000508460) Stream added, broadcasting: 3\nI0701 00:38:59.026421 1959 log.go:172] (0xc00097c9a0) Reply frame received for 3\nI0701 00:38:59.026454 1959 log.go:172] (0xc00097c9a0) (0xc00059b720) Create stream\nI0701 00:38:59.026465 1959 log.go:172] (0xc00097c9a0) (0xc00059b720) Stream added, broadcasting: 5\nI0701 00:38:59.027364 1959 log.go:172] (0xc00097c9a0) Reply frame received for 5\nI0701 00:38:59.106903 1959 log.go:172] (0xc00097c9a0) Data frame received for 5\nI0701 00:38:59.106961 1959 log.go:172] (0xc00059b720) (5) Data frame handling\nI0701 00:38:59.106980 1959 log.go:172] 
(0xc00059b720) (5) Data frame sent\nI0701 00:38:59.106993 1959 log.go:172] (0xc00097c9a0) Data frame received for 5\nI0701 00:38:59.107007 1959 log.go:172] (0xc00059b720) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0701 00:38:59.107059 1959 log.go:172] (0xc00097c9a0) Data frame received for 3\nI0701 00:38:59.107114 1959 log.go:172] (0xc000508460) (3) Data frame handling\nI0701 00:38:59.107144 1959 log.go:172] (0xc000508460) (3) Data frame sent\nI0701 00:38:59.107167 1959 log.go:172] (0xc00097c9a0) Data frame received for 3\nI0701 00:38:59.107182 1959 log.go:172] (0xc000508460) (3) Data frame handling\nI0701 00:38:59.108402 1959 log.go:172] (0xc00097c9a0) Data frame received for 1\nI0701 00:38:59.108427 1959 log.go:172] (0xc00059afa0) (1) Data frame handling\nI0701 00:38:59.108462 1959 log.go:172] (0xc00059afa0) (1) Data frame sent\nI0701 00:38:59.108494 1959 log.go:172] (0xc00097c9a0) (0xc00059afa0) Stream removed, broadcasting: 1\nI0701 00:38:59.108577 1959 log.go:172] (0xc00097c9a0) Go away received\nI0701 00:38:59.108811 1959 log.go:172] (0xc00097c9a0) (0xc00059afa0) Stream removed, broadcasting: 1\nI0701 00:38:59.108828 1959 log.go:172] (0xc00097c9a0) (0xc000508460) Stream removed, broadcasting: 3\nI0701 00:38:59.108838 1959 log.go:172] (0xc00097c9a0) (0xc00059b720) Stream removed, broadcasting: 5\n" Jul 1 00:38:59.115: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Jul 1 00:38:59.115: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Jul 1 00:38:59.115: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-560 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jul 1 00:38:59.326: INFO: stderr: "I0701 00:38:59.247650 1979 log.go:172] (0xc000aecfd0) (0xc000a928c0) Create stream\nI0701 00:38:59.247704 1979 log.go:172] (0xc000aecfd0) (0xc000a928c0) Stream added, broadcasting: 1\nI0701 00:38:59.252598 1979 log.go:172] (0xc000aecfd0) Reply frame received for 1\nI0701 00:38:59.252662 1979 log.go:172] (0xc000aecfd0) (0xc0005668c0) Create stream\nI0701 00:38:59.252680 1979 log.go:172] (0xc000aecfd0) (0xc0005668c0) Stream added, broadcasting: 3\nI0701 00:38:59.253858 1979 log.go:172] (0xc000aecfd0) Reply frame received for 3\nI0701 00:38:59.253887 1979 log.go:172] (0xc000aecfd0) (0xc000691680) Create stream\nI0701 00:38:59.253896 1979 log.go:172] (0xc000aecfd0) (0xc000691680) Stream added, broadcasting: 5\nI0701 00:38:59.254850 1979 log.go:172] (0xc000aecfd0) Reply frame received for 5\nI0701 00:38:59.311648 1979 log.go:172] (0xc000aecfd0) Data frame received for 5\nI0701 00:38:59.311670 1979 log.go:172] (0xc000691680) (5) Data frame handling\nI0701 00:38:59.311679 1979 log.go:172] (0xc000691680) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0701 00:38:59.317579 1979 log.go:172] (0xc000aecfd0) Data frame received for 3\nI0701 00:38:59.317622 1979 log.go:172] (0xc0005668c0) (3) Data frame handling\nI0701 00:38:59.317661 1979 log.go:172] (0xc0005668c0) (3) Data frame sent\nI0701 00:38:59.317679 1979 log.go:172] (0xc000aecfd0) Data frame received for 3\nI0701 00:38:59.317692 1979 log.go:172] (0xc0005668c0) (3) Data frame handling\nI0701 00:38:59.318001 1979 log.go:172] (0xc000aecfd0) Data frame received for 5\nI0701 00:38:59.318024 1979 log.go:172] (0xc000691680) (5) Data frame handling\nI0701 
00:38:59.319553 1979 log.go:172] (0xc000aecfd0) Data frame received for 1\nI0701 00:38:59.319573 1979 log.go:172] (0xc000a928c0) (1) Data frame handling\nI0701 00:38:59.319588 1979 log.go:172] (0xc000a928c0) (1) Data frame sent\nI0701 00:38:59.319608 1979 log.go:172] (0xc000aecfd0) (0xc000a928c0) Stream removed, broadcasting: 1\nI0701 00:38:59.319633 1979 log.go:172] (0xc000aecfd0) Go away received\nI0701 00:38:59.320003 1979 log.go:172] (0xc000aecfd0) (0xc000a928c0) Stream removed, broadcasting: 1\nI0701 00:38:59.320024 1979 log.go:172] (0xc000aecfd0) (0xc0005668c0) Stream removed, broadcasting: 3\nI0701 00:38:59.320034 1979 log.go:172] (0xc000aecfd0) (0xc000691680) Stream removed, broadcasting: 5\n" Jul 1 00:38:59.326: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Jul 1 00:38:59.326: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-2: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Jul 1 00:38:59.326: INFO: Scaling statefulset ss to 0 STEP: Verifying that stateful set ss was scaled down in reverse order [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:114 Jul 1 00:39:19.485: INFO: Deleting all statefulset in ns statefulset-560 Jul 1 00:39:19.489: INFO: Scaling statefulset ss to 0 Jul 1 00:39:19.499: INFO: Waiting for statefulset status.replicas updated to 0 Jul 1 00:39:19.502: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 1 00:39:19.547: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-560" for this suite. 
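The ordering and halting behaviour exercised above follows from the StatefulSet's default podManagementPolicy of OrderedReady: pods are created in ascending ordinal order, deleted in descending order, and neither direction proceeds past a pod that is not Ready. A quick way to watch the same ordering by hand (the namespace, statefulset name, and label are illustrative):

    kubectl scale statefulset ss --replicas=3 -n statefulset-demo
    kubectl get pods -l app=ss -n statefulset-demo -w   # ss-0, ss-1, ss-2 come up strictly in order
    kubectl scale statefulset ss --replicas=0 -n statefulset-demo
    # teardown runs in reverse: ss-2 terminates first, then ss-1, then ss-0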
• [SLOW TEST:82.638 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]","total":294,"completed":181,"skipped":3073,"failed":1,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]"]} SSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 1 00:39:19.562: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for intra-pod communication: udp [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Performing setup for networking test in namespace pod-network-test-4602 STEP: creating a selector STEP: Creating the service pods in kubernetes Jul 1 00:39:19.690: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable Jul 1 00:39:19.744: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Jul 1 00:39:21.758: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Jul 1 00:39:23.748: INFO: The status of Pod netserver-0 is Running (Ready = false) Jul 1 00:39:25.748: INFO: The status of Pod netserver-0 is Running (Ready = false) Jul 1 00:39:27.748: INFO: The status of Pod netserver-0 is Running (Ready = false) Jul 1 00:39:29.757: INFO: The status of Pod netserver-0 is Running (Ready = false) Jul 1 00:39:31.748: INFO: The status of Pod netserver-0 is Running (Ready = false) Jul 1 00:39:33.748: INFO: The status of Pod netserver-0 is Running (Ready = false) Jul 1 00:39:35.748: INFO: The status of Pod netserver-0 is Running (Ready = true) Jul 1 00:39:35.754: INFO: The status of Pod netserver-1 is Running (Ready = true) STEP: Creating test pods Jul 1 00:39:39.780: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.174:8080/dial?request=hostname&protocol=udp&host=10.244.1.173&port=8081&tries=1'] Namespace:pod-network-test-4602 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jul 1 00:39:39.780: INFO: >>> kubeConfig: /root/.kube/config I0701 00:39:39.817997 8 log.go:172] (0xc001eb5760) (0xc0019c1540) Create stream I0701 00:39:39.818033 8 log.go:172] (0xc001eb5760) (0xc0019c1540) Stream added, broadcasting: 1 I0701 00:39:39.820289 8 log.go:172] (0xc001eb5760) Reply frame received for 1 I0701 
00:39:39.820335 8 log.go:172] (0xc001eb5760) (0xc0011f5180) Create stream I0701 00:39:39.820348 8 log.go:172] (0xc001eb5760) (0xc0011f5180) Stream added, broadcasting: 3 I0701 00:39:39.821631 8 log.go:172] (0xc001eb5760) Reply frame received for 3 I0701 00:39:39.821666 8 log.go:172] (0xc001eb5760) (0xc0014c4780) Create stream I0701 00:39:39.821680 8 log.go:172] (0xc001eb5760) (0xc0014c4780) Stream added, broadcasting: 5 I0701 00:39:39.822840 8 log.go:172] (0xc001eb5760) Reply frame received for 5 I0701 00:39:39.930842 8 log.go:172] (0xc001eb5760) Data frame received for 3 I0701 00:39:39.930866 8 log.go:172] (0xc0011f5180) (3) Data frame handling I0701 00:39:39.930888 8 log.go:172] (0xc0011f5180) (3) Data frame sent I0701 00:39:39.931361 8 log.go:172] (0xc001eb5760) Data frame received for 3 I0701 00:39:39.931399 8 log.go:172] (0xc0011f5180) (3) Data frame handling I0701 00:39:39.931424 8 log.go:172] (0xc001eb5760) Data frame received for 5 I0701 00:39:39.931435 8 log.go:172] (0xc0014c4780) (5) Data frame handling I0701 00:39:39.933677 8 log.go:172] (0xc001eb5760) Data frame received for 1 I0701 00:39:39.933763 8 log.go:172] (0xc0019c1540) (1) Data frame handling I0701 00:39:39.933810 8 log.go:172] (0xc0019c1540) (1) Data frame sent I0701 00:39:39.933838 8 log.go:172] (0xc001eb5760) (0xc0019c1540) Stream removed, broadcasting: 1 I0701 00:39:39.933861 8 log.go:172] (0xc001eb5760) Go away received I0701 00:39:39.934027 8 log.go:172] (0xc001eb5760) (0xc0019c1540) Stream removed, broadcasting: 1 I0701 00:39:39.934075 8 log.go:172] (0xc001eb5760) (0xc0011f5180) Stream removed, broadcasting: 3 I0701 00:39:39.934095 8 log.go:172] (0xc001eb5760) (0xc0014c4780) Stream removed, broadcasting: 5 Jul 1 00:39:39.934: INFO: Waiting for responses: map[] Jul 1 00:39:39.938: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.174:8080/dial?request=hostname&protocol=udp&host=10.244.2.211&port=8081&tries=1'] Namespace:pod-network-test-4602 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jul 1 00:39:39.938: INFO: >>> kubeConfig: /root/.kube/config I0701 00:39:39.967277 8 log.go:172] (0xc00182a370) (0xc0014c4e60) Create stream I0701 00:39:39.967304 8 log.go:172] (0xc00182a370) (0xc0014c4e60) Stream added, broadcasting: 1 I0701 00:39:39.969437 8 log.go:172] (0xc00182a370) Reply frame received for 1 I0701 00:39:39.969485 8 log.go:172] (0xc00182a370) (0xc0014c4f00) Create stream I0701 00:39:39.969499 8 log.go:172] (0xc00182a370) (0xc0014c4f00) Stream added, broadcasting: 3 I0701 00:39:39.970378 8 log.go:172] (0xc00182a370) Reply frame received for 3 I0701 00:39:39.970406 8 log.go:172] (0xc00182a370) (0xc0014c5180) Create stream I0701 00:39:39.970416 8 log.go:172] (0xc00182a370) (0xc0014c5180) Stream added, broadcasting: 5 I0701 00:39:39.971137 8 log.go:172] (0xc00182a370) Reply frame received for 5 I0701 00:39:40.032685 8 log.go:172] (0xc00182a370) Data frame received for 3 I0701 00:39:40.032727 8 log.go:172] (0xc0014c4f00) (3) Data frame handling I0701 00:39:40.032782 8 log.go:172] (0xc0014c4f00) (3) Data frame sent I0701 00:39:40.033258 8 log.go:172] (0xc00182a370) Data frame received for 3 I0701 00:39:40.033279 8 log.go:172] (0xc0014c4f00) (3) Data frame handling I0701 00:39:40.033310 8 log.go:172] (0xc00182a370) Data frame received for 5 I0701 00:39:40.033332 8 log.go:172] (0xc0014c5180) (5) Data frame handling I0701 00:39:40.035277 8 log.go:172] (0xc00182a370) Data frame received for 1 I0701 00:39:40.035303 8 
log.go:172] (0xc0014c4e60) (1) Data frame handling I0701 00:39:40.035314 8 log.go:172] (0xc0014c4e60) (1) Data frame sent I0701 00:39:40.035333 8 log.go:172] (0xc00182a370) (0xc0014c4e60) Stream removed, broadcasting: 1 I0701 00:39:40.035381 8 log.go:172] (0xc00182a370) Go away received I0701 00:39:40.035430 8 log.go:172] (0xc00182a370) (0xc0014c4e60) Stream removed, broadcasting: 1 I0701 00:39:40.035450 8 log.go:172] (0xc00182a370) (0xc0014c4f00) Stream removed, broadcasting: 3 I0701 00:39:40.035471 8 log.go:172] (0xc00182a370) (0xc0014c5180) Stream removed, broadcasting: 5 Jul 1 00:39:40.035: INFO: Waiting for responses: map[] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 1 00:39:40.035: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-4602" for this suite. • [SLOW TEST:20.481 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29 should function for intra-pod communication: udp [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [NodeConformance] [Conformance]","total":294,"completed":182,"skipped":3080,"failed":1,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Security Context When creating a container with runAsUser should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 1 00:39:40.044: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41 [It] should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Jul 1 00:39:40.166: INFO: Waiting up to 5m0s for pod "busybox-user-65534-2d4b1ee9-53f6-4a59-8fd3-384d9689734b" in namespace "security-context-test-523" to be "Succeeded or Failed" Jul 1 00:39:40.187: INFO: Pod "busybox-user-65534-2d4b1ee9-53f6-4a59-8fd3-384d9689734b": Phase="Pending", Reason="", readiness=false. Elapsed: 20.846249ms Jul 1 00:39:42.249: INFO: Pod "busybox-user-65534-2d4b1ee9-53f6-4a59-8fd3-384d9689734b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.083328627s Jul 1 00:39:44.266: INFO: Pod "busybox-user-65534-2d4b1ee9-53f6-4a59-8fd3-384d9689734b": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.099713788s Jul 1 00:39:44.266: INFO: Pod "busybox-user-65534-2d4b1ee9-53f6-4a59-8fd3-384d9689734b" satisfied condition "Succeeded or Failed" [AfterEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 1 00:39:44.266: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-523" for this suite. •{"msg":"PASSED [k8s.io] Security Context When creating a container with runAsUser should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]","total":294,"completed":183,"skipped":3102,"failed":1,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should get a host IP [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 1 00:39:44.281: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:179 [It] should get a host IP [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating pod Jul 1 00:39:50.398: INFO: Pod pod-hostip-8c08f86d-9113-4fd3-b198-c0e9539a93c4 has hostIP: 172.17.0.12 [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 1 00:39:50.398: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-8011" for this suite. 
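Two pod fields are exercised in the specs above: securityContext.runAsUser (the uid-65534 check) and the kubelet-populated status.hostIP that the host-IP spec reads back. A minimal sketch of the former with hypothetical names; busybox's id -u stands in for the test image's uid check:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: busybox-user-65534
spec:
  restartPolicy: Never
  containers:
  - name: busybox
    image: busybox
    command: ["sh", "-c", "id -u"]   # prints the effective uid
    securityContext:
      runAsUser: 65534               # the field under test
EOF
kubectl logs busybox-user-65534                                    # expected: 65534
kubectl get pod busybox-user-65534 -o jsonpath='{.status.hostIP}'  # how status.hostIP is read back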
• [SLOW TEST:6.126 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should get a host IP [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Pods should get a host IP [NodeConformance] [Conformance]","total":294,"completed":184,"skipped":3133,"failed":1,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]"]} SSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 1 00:39:50.407: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:38 [It] should print the output to logs [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 1 00:39:54.579: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-4336" for this suite. •{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance]","total":294,"completed":185,"skipped":3137,"failed":1,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]"]} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 1 00:39:54.587: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with configmap pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating pod pod-subpath-test-configmap-m58w STEP: Creating a pod to test atomic-volume-subpath Jul 1 00:39:54.701: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-m58w" in namespace "subpath-5265" to be "Succeeded or Failed" Jul 1 00:39:54.705: INFO: Pod "pod-subpath-test-configmap-m58w": Phase="Pending", Reason="", readiness=false. Elapsed: 3.935756ms Jul 1 00:39:56.776: INFO: Pod "pod-subpath-test-configmap-m58w": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.074370985s Jul 1 00:39:58.780: INFO: Pod "pod-subpath-test-configmap-m58w": Phase="Running", Reason="", readiness=true. Elapsed: 4.078683111s Jul 1 00:40:00.783: INFO: Pod "pod-subpath-test-configmap-m58w": Phase="Running", Reason="", readiness=true. Elapsed: 6.081795286s Jul 1 00:40:02.787: INFO: Pod "pod-subpath-test-configmap-m58w": Phase="Running", Reason="", readiness=true. Elapsed: 8.085521751s Jul 1 00:40:04.792: INFO: Pod "pod-subpath-test-configmap-m58w": Phase="Running", Reason="", readiness=true. Elapsed: 10.090175302s Jul 1 00:40:06.796: INFO: Pod "pod-subpath-test-configmap-m58w": Phase="Running", Reason="", readiness=true. Elapsed: 12.094877299s Jul 1 00:40:08.800: INFO: Pod "pod-subpath-test-configmap-m58w": Phase="Running", Reason="", readiness=true. Elapsed: 14.09888171s Jul 1 00:40:10.805: INFO: Pod "pod-subpath-test-configmap-m58w": Phase="Running", Reason="", readiness=true. Elapsed: 16.103095027s Jul 1 00:40:12.808: INFO: Pod "pod-subpath-test-configmap-m58w": Phase="Running", Reason="", readiness=true. Elapsed: 18.106907109s Jul 1 00:40:14.813: INFO: Pod "pod-subpath-test-configmap-m58w": Phase="Running", Reason="", readiness=true. Elapsed: 20.111968349s Jul 1 00:40:16.820: INFO: Pod "pod-subpath-test-configmap-m58w": Phase="Running", Reason="", readiness=true. Elapsed: 22.118154416s Jul 1 00:40:18.824: INFO: Pod "pod-subpath-test-configmap-m58w": Phase="Running", Reason="", readiness=true. Elapsed: 24.122630158s Jul 1 00:40:20.829: INFO: Pod "pod-subpath-test-configmap-m58w": Phase="Succeeded", Reason="", readiness=false. Elapsed: 26.127678762s STEP: Saw pod success Jul 1 00:40:20.829: INFO: Pod "pod-subpath-test-configmap-m58w" satisfied condition "Succeeded or Failed" Jul 1 00:40:20.833: INFO: Trying to get logs from node latest-worker pod pod-subpath-test-configmap-m58w container test-container-subpath-configmap-m58w: STEP: delete the pod Jul 1 00:40:20.872: INFO: Waiting for pod pod-subpath-test-configmap-m58w to disappear Jul 1 00:40:20.914: INFO: Pod pod-subpath-test-configmap-m58w no longer exists STEP: Deleting pod pod-subpath-test-configmap-m58w Jul 1 00:40:20.914: INFO: Deleting pod "pod-subpath-test-configmap-m58w" in namespace "subpath-5265" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 1 00:40:20.917: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-5265" for this suite. 
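The ~26 seconds of Running polls above come from the test container repeatedly reading a file it reaches through a subPath mount into an atomically written ConfigMap volume. A minimal sketch with assumed names (the suite generates its own ConfigMap and paths):

kubectl create configmap subpath-config --from-literal=data-1=hello
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: pod-subpath-configmap
spec:
  restartPolicy: Never
  volumes:
  - name: cfg
    configMap:
      name: subpath-config
  containers:
  - name: reader
    image: busybox
    command: ["sh", "-c", "cat /probe/file"]
    volumeMounts:
    - name: cfg
      mountPath: /probe/file   # a single key surfaced as one file
      subPath: data-1          # must name a key in the ConfigMap
EOF

One caveat worth remembering: subPath mounts bypass the atomic writer's symlink swap, so they do not pick up later ConfigMap updates.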
• [SLOW TEST:26.337 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with configmap pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [LinuxOnly] [Conformance]","total":294,"completed":186,"skipped":3156,"failed":1,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 1 00:40:20.925: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name projected-configmap-test-volume-0c5b76b8-8b48-4149-8c72-ab68b0725a28 STEP: Creating a pod to test consume configMaps Jul 1 00:40:21.003: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-600b3a4a-8da7-4b52-aaba-bb9750fa655b" in namespace "projected-7846" to be "Succeeded or Failed" Jul 1 00:40:21.076: INFO: Pod "pod-projected-configmaps-600b3a4a-8da7-4b52-aaba-bb9750fa655b": Phase="Pending", Reason="", readiness=false. Elapsed: 72.743682ms Jul 1 00:40:23.080: INFO: Pod "pod-projected-configmaps-600b3a4a-8da7-4b52-aaba-bb9750fa655b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.076962157s Jul 1 00:40:25.085: INFO: Pod "pod-projected-configmaps-600b3a4a-8da7-4b52-aaba-bb9750fa655b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.082255853s STEP: Saw pod success Jul 1 00:40:25.085: INFO: Pod "pod-projected-configmaps-600b3a4a-8da7-4b52-aaba-bb9750fa655b" satisfied condition "Succeeded or Failed" Jul 1 00:40:25.088: INFO: Trying to get logs from node latest-worker2 pod pod-projected-configmaps-600b3a4a-8da7-4b52-aaba-bb9750fa655b container projected-configmap-volume-test: STEP: delete the pod Jul 1 00:40:25.121: INFO: Waiting for pod pod-projected-configmaps-600b3a4a-8da7-4b52-aaba-bb9750fa655b to disappear Jul 1 00:40:25.166: INFO: Pod pod-projected-configmaps-600b3a4a-8da7-4b52-aaba-bb9750fa655b no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 1 00:40:25.166: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-7846" for this suite. 
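"Consumable as non-root" in the spec above boils down to a projected ConfigMap volume plus a non-zero runAsUser. A sketch under assumed names; the fsGroup line is an addition for illustration, not something the log shows:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-nonroot
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1000          # any non-root uid
    fsGroup: 1000            # assumed: makes the projected files group-readable
  volumes:
  - name: cfg
    projected:
      sources:
      - configMap:
          name: my-config    # assumed ConfigMap name
  containers:
  - name: reader
    image: busybox
    command: ["sh", "-c", "cat /etc/cfg/*"]
    volumeMounts:
    - name: cfg
      mountPath: /etc/cfg
EOF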
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]","total":294,"completed":187,"skipped":3183,"failed":1,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]"]} S ------------------------------ [sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 1 00:40:25.174: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should orphan pods created by rc if delete options say so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the rc STEP: delete the rc STEP: wait for the rc to be deleted STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the pods STEP: Gathering metrics W0701 00:41:06.801323 8 metrics_grabber.go:94] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Jul 1 00:41:06.801: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: Jul 1 00:41:06.801: INFO: Deleting pod "simpletest.rc-9m7vg" in namespace "gc-8884" Jul 1 00:41:06.839: INFO: Deleting pod "simpletest.rc-dz2c8" in namespace "gc-8884" Jul 1 00:41:06.921: INFO: Deleting pod "simpletest.rc-gq66p" in namespace "gc-8884" Jul 1 00:41:06.958: INFO: Deleting pod "simpletest.rc-kz2m5" in namespace "gc-8884" Jul 1 00:41:07.347: INFO: Deleting pod "simpletest.rc-m9vmg" in namespace "gc-8884" Jul 1 00:41:07.437: INFO: Deleting pod "simpletest.rc-nqqps" in namespace "gc-8884" Jul 1 00:41:07.533: INFO: Deleting pod "simpletest.rc-sr79d" in namespace "gc-8884" Jul 1 00:41:07.581: INFO: Deleting pod "simpletest.rc-w4psr" in namespace "gc-8884" Jul 1 00:41:07.904: INFO: Deleting pod "simpletest.rc-wclfb" in namespace "gc-8884" Jul 1 00:41:08.255: INFO: Deleting pod "simpletest.rc-wsvpc" in namespace "gc-8884" [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 1 00:41:08.535: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-8884" for this suite. 
• [SLOW TEST:43.403 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should orphan pods created by rc if delete options say so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance]","total":294,"completed":188,"skipped":3184,"failed":1,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]"]} SS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 1 00:41:08.577: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name projected-configmap-test-volume-map-331bcc96-5915-47ad-8621-f72dc6fd2bd2 STEP: Creating a pod to test consume configMaps Jul 1 00:41:08.955: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-c4311426-695d-45cf-a89c-fad5be63928c" in namespace "projected-8558" to be "Succeeded or Failed" Jul 1 00:41:09.178: INFO: Pod "pod-projected-configmaps-c4311426-695d-45cf-a89c-fad5be63928c": Phase="Pending", Reason="", readiness=false. Elapsed: 223.470561ms Jul 1 00:41:11.298: INFO: Pod "pod-projected-configmaps-c4311426-695d-45cf-a89c-fad5be63928c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.343327094s Jul 1 00:41:13.301: INFO: Pod "pod-projected-configmaps-c4311426-695d-45cf-a89c-fad5be63928c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.346387417s STEP: Saw pod success Jul 1 00:41:13.301: INFO: Pod "pod-projected-configmaps-c4311426-695d-45cf-a89c-fad5be63928c" satisfied condition "Succeeded or Failed" Jul 1 00:41:13.303: INFO: Trying to get logs from node latest-worker2 pod pod-projected-configmaps-c4311426-695d-45cf-a89c-fad5be63928c container projected-configmap-volume-test: STEP: delete the pod Jul 1 00:41:13.429: INFO: Waiting for pod pod-projected-configmaps-c4311426-695d-45cf-a89c-fad5be63928c to disappear Jul 1 00:41:13.480: INFO: Pod pod-projected-configmaps-c4311426-695d-45cf-a89c-fad5be63928c no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 1 00:41:13.480: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8558" for this suite. 
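"Mappings and Item mode set" translates to the items list of a projected ConfigMap source: each entry renames a key to a relative path and can pin per-file permissions. A sketch with assumed names and keys:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-mappings
spec:
  restartPolicy: Never
  volumes:
  - name: cfg
    projected:
      sources:
      - configMap:
          name: my-config            # assumed; must contain a key "data-1"
          items:
          - key: data-1
            path: path/to/data-2     # the "mapping": expose the key under a new relative path
            mode: 0400               # the "Item mode": per-file permission bits
  containers:
  - name: reader
    image: busybox
    command: ["sh", "-c", "ls -l /etc/cfg/path/to && cat /etc/cfg/path/to/data-2"]
    volumeMounts:
    - name: cfg
      mountPath: /etc/cfg
EOF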
• [SLOW TEST:5.075 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:36 should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":294,"completed":189,"skipped":3186,"failed":1,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]"]} SSS ------------------------------ [sig-auth] ServiceAccounts should allow opting out of API token automount [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 1 00:41:13.652: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should allow opting out of API token automount [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: getting the auto-created API token Jul 1 00:41:14.925: INFO: created pod pod-service-account-defaultsa Jul 1 00:41:14.925: INFO: pod pod-service-account-defaultsa service account token volume mount: true Jul 1 00:41:14.930: INFO: created pod pod-service-account-mountsa Jul 1 00:41:14.930: INFO: pod pod-service-account-mountsa service account token volume mount: true Jul 1 00:41:14.950: INFO: created pod pod-service-account-nomountsa Jul 1 00:41:14.950: INFO: pod pod-service-account-nomountsa service account token volume mount: false Jul 1 00:41:15.010: INFO: created pod pod-service-account-defaultsa-mountspec Jul 1 00:41:15.010: INFO: pod pod-service-account-defaultsa-mountspec service account token volume mount: true Jul 1 00:41:15.065: INFO: created pod pod-service-account-mountsa-mountspec Jul 1 00:41:15.065: INFO: pod pod-service-account-mountsa-mountspec service account token volume mount: true Jul 1 00:41:15.090: INFO: created pod pod-service-account-nomountsa-mountspec Jul 1 00:41:15.090: INFO: pod pod-service-account-nomountsa-mountspec service account token volume mount: true Jul 1 00:41:15.166: INFO: created pod pod-service-account-defaultsa-nomountspec Jul 1 00:41:15.166: INFO: pod pod-service-account-defaultsa-nomountspec service account token volume mount: false Jul 1 00:41:15.191: INFO: created pod pod-service-account-mountsa-nomountspec Jul 1 00:41:15.191: INFO: pod pod-service-account-mountsa-nomountspec service account token volume mount: false Jul 1 00:41:15.255: INFO: created pod pod-service-account-nomountsa-nomountspec Jul 1 00:41:15.255: INFO: pod pod-service-account-nomountsa-nomountspec service account token volume mount: false [AfterEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 1 00:41:15.255: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svcaccounts-4878" for this suite. 
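The nine pods above walk the automount matrix: the token volume is mounted unless either the ServiceAccount or the pod opts out, and a pod-level setting overrides the ServiceAccount's. A sketch of the opt-out pair, names assumed:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nomount-sa
automountServiceAccountToken: false    # opt out for every pod that uses this ServiceAccount
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-nomountsa
spec:
  serviceAccountName: nomount-sa
  automountServiceAccountToken: false  # pod-level field; when set, it wins over the SA's value
  containers:
  - name: main
    image: busybox
    command: ["sleep", "3600"]
EOF
kubectl get pod pod-nomountsa -o jsonpath='{.spec.volumes}'   # no serviceaccount token volume listed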
•{"msg":"PASSED [sig-auth] ServiceAccounts should allow opting out of API token automount [Conformance]","total":294,"completed":190,"skipped":3189,"failed":1,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and ensure its status is promptly calculated. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 1 00:41:15.564: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and ensure its status is promptly calculated. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 1 00:41:23.319: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-7526" for this suite. • [SLOW TEST:8.259 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and ensure its status is promptly calculated. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and ensure its status is promptly calculated. [Conformance]","total":294,"completed":191,"skipped":3211,"failed":1,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]"]} SSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 1 00:41:23.823: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name projected-configmap-test-volume-d0434256-84f4-4858-9b1f-59748d0cf05e STEP: Creating a pod to test consume configMaps Jul 1 00:41:25.388: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-8f9521f9-3547-47b3-a34c-3027b91f3bf1" in namespace "projected-2158" to be "Succeeded or Failed" Jul 1 00:41:25.458: INFO: Pod "pod-projected-configmaps-8f9521f9-3547-47b3-a34c-3027b91f3bf1": Phase="Pending", Reason="", readiness=false. 
Elapsed: 70.398578ms Jul 1 00:41:27.537: INFO: Pod "pod-projected-configmaps-8f9521f9-3547-47b3-a34c-3027b91f3bf1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.149651627s Jul 1 00:41:30.032: INFO: Pod "pod-projected-configmaps-8f9521f9-3547-47b3-a34c-3027b91f3bf1": Phase="Pending", Reason="", readiness=false. Elapsed: 4.644255689s Jul 1 00:41:32.382: INFO: Pod "pod-projected-configmaps-8f9521f9-3547-47b3-a34c-3027b91f3bf1": Phase="Pending", Reason="", readiness=false. Elapsed: 6.993880063s Jul 1 00:41:34.471: INFO: Pod "pod-projected-configmaps-8f9521f9-3547-47b3-a34c-3027b91f3bf1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 9.083774385s STEP: Saw pod success Jul 1 00:41:34.472: INFO: Pod "pod-projected-configmaps-8f9521f9-3547-47b3-a34c-3027b91f3bf1" satisfied condition "Succeeded or Failed" Jul 1 00:41:34.474: INFO: Trying to get logs from node latest-worker pod pod-projected-configmaps-8f9521f9-3547-47b3-a34c-3027b91f3bf1 container projected-configmap-volume-test: STEP: delete the pod Jul 1 00:41:34.616: INFO: Waiting for pod pod-projected-configmaps-8f9521f9-3547-47b3-a34c-3027b91f3bf1 to disappear Jul 1 00:41:34.647: INFO: Pod pod-projected-configmaps-8f9521f9-3547-47b3-a34c-3027b91f3bf1 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 1 00:41:34.647: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2158" for this suite. • [SLOW TEST:10.850 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:36 should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":294,"completed":192,"skipped":3227,"failed":1,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]"]} SSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD without validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 1 00:41:34.673: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD without validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Jul 1 00:41:34.763: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with any unknown properties Jul 1 00:41:37.687: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-5664 create -f -' Jul 1 00:41:41.120: INFO: stderr: "" Jul 1 00:41:41.120: INFO: stdout: "e2e-test-crd-publish-openapi-9732-crd.crd-publish-openapi-test-empty.example.com/test-cr created\n" 
Jul 1 00:41:41.120: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-5664 delete e2e-test-crd-publish-openapi-9732-crds test-cr' Jul 1 00:41:41.250: INFO: stderr: "" Jul 1 00:41:41.251: INFO: stdout: "e2e-test-crd-publish-openapi-9732-crd.crd-publish-openapi-test-empty.example.com \"test-cr\" deleted\n" Jul 1 00:41:41.251: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-5664 apply -f -' Jul 1 00:41:42.565: INFO: stderr: "" Jul 1 00:41:42.565: INFO: stdout: "e2e-test-crd-publish-openapi-9732-crd.crd-publish-openapi-test-empty.example.com/test-cr created\n" Jul 1 00:41:42.565: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-5664 delete e2e-test-crd-publish-openapi-9732-crds test-cr' Jul 1 00:41:42.690: INFO: stderr: "" Jul 1 00:41:42.690: INFO: stdout: "e2e-test-crd-publish-openapi-9732-crd.crd-publish-openapi-test-empty.example.com \"test-cr\" deleted\n" STEP: kubectl explain works to explain CR without validation schema Jul 1 00:41:42.690: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-9732-crds' Jul 1 00:41:43.479: INFO: stderr: "" Jul 1 00:41:43.480: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-9732-crd\nVERSION: crd-publish-openapi-test-empty.example.com/v1\n\nDESCRIPTION:\n \n" [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 1 00:41:46.382: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-5664" for this suite. 
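On this 1.18-era cluster the suite can register a CRD version that carries no validation schema, so any property is accepted and kubectl explain prints an empty DESCRIPTION, as above. With apiextensions.k8s.io/v1, where a structural schema is mandatory, the closest equivalent is x-kubernetes-preserve-unknown-fields; the group and kind below are made up:

kubectl apply -f - <<'EOF'
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: widgets.example.com
spec:
  group: example.com
  scope: Namespaced
  names:
    plural: widgets
    singular: widget
    kind: Widget
  versions:
  - name: v1
    served: true
    storage: true
    schema:
      openAPIV3Schema:
        type: object
        x-kubernetes-preserve-unknown-fields: true   # accept arbitrary unknown properties
EOF
kubectl apply -f - <<'EOF'
apiVersion: example.com/v1
kind: Widget
metadata:
  name: test-cr
spec:
  anything: goes       # unknown property; passes client- and server-side validation
EOF
kubectl explain widgets  # KIND/VERSION with an empty DESCRIPTION, as in the log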
• [SLOW TEST:11.718 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD without validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD without validation schema [Conformance]","total":294,"completed":193,"skipped":3230,"failed":1,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 1 00:41:46.392: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test override arguments Jul 1 00:41:46.485: INFO: Waiting up to 5m0s for pod "client-containers-8c812c37-530f-4dda-a774-67de7e0f6aed" in namespace "containers-1334" to be "Succeeded or Failed" Jul 1 00:41:46.493: INFO: Pod "client-containers-8c812c37-530f-4dda-a774-67de7e0f6aed": Phase="Pending", Reason="", readiness=false. Elapsed: 7.643383ms Jul 1 00:41:48.542: INFO: Pod "client-containers-8c812c37-530f-4dda-a774-67de7e0f6aed": Phase="Pending", Reason="", readiness=false. Elapsed: 2.056537772s Jul 1 00:41:50.546: INFO: Pod "client-containers-8c812c37-530f-4dda-a774-67de7e0f6aed": Phase="Running", Reason="", readiness=true. Elapsed: 4.06101889s Jul 1 00:41:52.551: INFO: Pod "client-containers-8c812c37-530f-4dda-a774-67de7e0f6aed": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.065663581s STEP: Saw pod success Jul 1 00:41:52.551: INFO: Pod "client-containers-8c812c37-530f-4dda-a774-67de7e0f6aed" satisfied condition "Succeeded or Failed" Jul 1 00:41:52.555: INFO: Trying to get logs from node latest-worker pod client-containers-8c812c37-530f-4dda-a774-67de7e0f6aed container test-container: STEP: delete the pod Jul 1 00:41:52.610: INFO: Waiting for pod client-containers-8c812c37-530f-4dda-a774-67de7e0f6aed to disappear Jul 1 00:41:52.626: INFO: Pod client-containers-8c812c37-530f-4dda-a774-67de7e0f6aed no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 1 00:41:52.626: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-1334" for this suite. 
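"Override the image's default arguments (docker cmd)" maps onto the container's args field: args replaces the image's CMD, while command (if set) replaces its ENTRYPOINT. A sketch with assumed names:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: client-containers-args
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox                  # any image with a default CMD works
    command: ["echo"]               # optional: replaces the image ENTRYPOINT
    args: ["overridden", "args"]    # replaces the image CMD, the "docker cmd" in the spec name
EOF
kubectl logs client-containers-args   # prints: overridden args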
• [SLOW TEST:6.242 seconds] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]","total":294,"completed":194,"skipped":3257,"failed":1,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 1 00:41:52.634: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90 Jul 1 00:41:52.689: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Jul 1 00:41:52.701: INFO: Waiting for terminating namespaces to be deleted... Jul 1 00:41:52.703: INFO: Logging pods the apiserver thinks is on node latest-worker before test Jul 1 00:41:52.708: INFO: rally-c184502e-30nwopzm from c-rally-c184502e-zuy338to started at 2020-05-11 08:48:25 +0000 UTC (1 container statuses recorded) Jul 1 00:41:52.708: INFO: Container rally-c184502e-30nwopzm ready: true, restart count 0 Jul 1 00:41:52.708: INFO: rally-c184502e-30nwopzm-7fmqm from c-rally-c184502e-zuy338to started at 2020-05-11 08:48:29 +0000 UTC (1 container statuses recorded) Jul 1 00:41:52.708: INFO: Container rally-c184502e-30nwopzm ready: false, restart count 0 Jul 1 00:41:52.708: INFO: kindnet-hg2tf from kube-system started at 2020-04-29 09:54:13 +0000 UTC (1 container statuses recorded) Jul 1 00:41:52.708: INFO: Container kindnet-cni ready: true, restart count 7 Jul 1 00:41:52.708: INFO: kube-proxy-c8n27 from kube-system started at 2020-04-29 09:54:13 +0000 UTC (1 container statuses recorded) Jul 1 00:41:52.708: INFO: Container kube-proxy ready: true, restart count 0 Jul 1 00:41:52.708: INFO: Logging pods the apiserver thinks is on node latest-worker2 before test Jul 1 00:41:52.713: INFO: rally-c184502e-ept97j69-6xvbj from c-rally-c184502e-2luhd3t4 started at 2020-05-11 08:48:03 +0000 UTC (1 container statuses recorded) Jul 1 00:41:52.713: INFO: Container rally-c184502e-ept97j69 ready: false, restart count 0 Jul 1 00:41:52.713: INFO: terminate-cmd-rpa297bb112-e54d-4fcd-9997-b59cbf421a58 from container-runtime-7090 started at 2020-05-12 09:11:35 +0000 UTC (1 container statuses recorded) Jul 1 00:41:52.713: INFO: Container terminate-cmd-rpa ready: true, restart count 2 Jul 1 00:41:52.713: INFO: kindnet-jl4dn from kube-system started at 2020-04-29 09:54:11 +0000 UTC (1 container statuses recorded) Jul 1 00:41:52.713: INFO: Container kindnet-cni ready: true, restart count 7 Jul 1 00:41:52.713: INFO: 
kube-proxy-pcmmp from kube-system started at 2020-04-29 09:54:11 +0000 UTC (1 container statuses recorded) Jul 1 00:41:52.713: INFO: Container kube-proxy ready: true, restart count 0 [It] validates that NodeSelector is respected if matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random label on the found node. STEP: verifying the node has the label kubernetes.io/e2e-e0487a8f-cd86-498c-8520-f4129f470591 42 STEP: Trying to relaunch the pod, now with labels. STEP: removing the label kubernetes.io/e2e-e0487a8f-cd86-498c-8520-f4129f470591 off the node latest-worker2 STEP: verifying the node doesn't have the label kubernetes.io/e2e-e0487a8f-cd86-498c-8520-f4129f470591 [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 1 00:42:00.904: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-4411" for this suite. [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 • [SLOW TEST:8.278 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates that NodeSelector is respected if matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance]","total":294,"completed":195,"skipped":3299,"failed":1,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]"]} SS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 1 00:42:00.912: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set Jul 1 00:42:05.058: INFO: Expected: &{DONE} to match Container's Termination Message: DONE -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 1 00:42:05.099: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: 
Destroying namespace "container-runtime-8046" for this suite. •{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]","total":294,"completed":196,"skipped":3301,"failed":1,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]"]} SSSSSSSSSSSSSSS ------------------------------ [k8s.io] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 1 00:42:05.109: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow substituting values in a container's args [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test substitution in container's args Jul 1 00:42:05.180: INFO: Waiting up to 5m0s for pod "var-expansion-d24b167e-bebc-42e2-8fc8-c3496c331fc9" in namespace "var-expansion-9825" to be "Succeeded or Failed" Jul 1 00:42:05.200: INFO: Pod "var-expansion-d24b167e-bebc-42e2-8fc8-c3496c331fc9": Phase="Pending", Reason="", readiness=false. Elapsed: 20.411277ms Jul 1 00:42:07.268: INFO: Pod "var-expansion-d24b167e-bebc-42e2-8fc8-c3496c331fc9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.088407207s Jul 1 00:42:09.272: INFO: Pod "var-expansion-d24b167e-bebc-42e2-8fc8-c3496c331fc9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.092201279s STEP: Saw pod success Jul 1 00:42:09.272: INFO: Pod "var-expansion-d24b167e-bebc-42e2-8fc8-c3496c331fc9" satisfied condition "Succeeded or Failed" Jul 1 00:42:09.275: INFO: Trying to get logs from node latest-worker pod var-expansion-d24b167e-bebc-42e2-8fc8-c3496c331fc9 container dapi-container: STEP: delete the pod Jul 1 00:42:09.291: INFO: Waiting for pod var-expansion-d24b167e-bebc-42e2-8fc8-c3496c331fc9 to disappear Jul 1 00:42:09.296: INFO: Pod var-expansion-d24b167e-bebc-42e2-8fc8-c3496c331fc9 no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 1 00:42:09.296: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-9825" for this suite. 
•{"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance]","total":294,"completed":197,"skipped":3316,"failed":1,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Service endpoints latency should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 1 00:42:09.303: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svc-latency STEP: Waiting for a default service account to be provisioned in namespace [It] should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Jul 1 00:42:09.714: INFO: >>> kubeConfig: /root/.kube/config STEP: creating replication controller svc-latency-rc in namespace svc-latency-9308 I0701 00:42:09.775396 8 runners.go:190] Created replication controller with name: svc-latency-rc, namespace: svc-latency-9308, replica count: 1 I0701 00:42:10.825839 8 runners.go:190] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0701 00:42:11.826075 8 runners.go:190] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0701 00:42:12.826352 8 runners.go:190] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0701 00:42:13.826601 8 runners.go:190] svc-latency-rc Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Jul 1 00:42:13.967: INFO: Created: latency-svc-zkmxs Jul 1 00:42:13.983: INFO: Got endpoints: latency-svc-zkmxs [56.711377ms] Jul 1 00:42:14.074: INFO: Created: latency-svc-mflgv Jul 1 00:42:14.083: INFO: Got endpoints: latency-svc-mflgv [99.874335ms] Jul 1 00:42:14.123: INFO: Created: latency-svc-fp45k Jul 1 00:42:14.137: INFO: Got endpoints: latency-svc-fp45k [154.101264ms] Jul 1 00:42:14.185: INFO: Created: latency-svc-pxwnq Jul 1 00:42:14.191: INFO: Got endpoints: latency-svc-pxwnq [208.299539ms] Jul 1 00:42:14.248: INFO: Created: latency-svc-qgd88 Jul 1 00:42:14.332: INFO: Got endpoints: latency-svc-qgd88 [348.558954ms] Jul 1 00:42:14.351: INFO: Created: latency-svc-8mh5s Jul 1 00:42:14.366: INFO: Got endpoints: latency-svc-8mh5s [382.857748ms] Jul 1 00:42:14.417: INFO: Created: latency-svc-st768 Jul 1 00:42:14.466: INFO: Got endpoints: latency-svc-st768 [482.558989ms] Jul 1 00:42:14.512: INFO: Created: latency-svc-qv6hh Jul 1 00:42:14.523: INFO: Got endpoints: latency-svc-qv6hh [539.391408ms] Jul 1 00:42:14.555: INFO: Created: latency-svc-c58vh Jul 1 00:42:14.565: INFO: Got endpoints: latency-svc-c58vh [581.417799ms] Jul 1 00:42:14.675: INFO: Created: latency-svc-jtx59 Jul 1 00:42:14.727: INFO: Got endpoints: latency-svc-jtx59 [743.909106ms] Jul 1 00:42:14.789: INFO: Created: latency-svc-2fg2x Jul 1 00:42:14.818: INFO: Got endpoints: latency-svc-2fg2x [834.522688ms] Jul 1 00:42:14.927: INFO: Created: latency-svc-kkt8x Jul 1 00:42:14.951: INFO: Got endpoints: 
latency-svc-kkt8x [967.746051ms] Jul 1 00:42:15.064: INFO: Created: latency-svc-f9vc5 Jul 1 00:42:15.118: INFO: Got endpoints: latency-svc-f9vc5 [1.134597976s] Jul 1 00:42:15.209: INFO: Created: latency-svc-lbhfc Jul 1 00:42:15.227: INFO: Got endpoints: latency-svc-lbhfc [1.243470233s] Jul 1 00:42:15.304: INFO: Created: latency-svc-9q4ml Jul 1 00:42:15.354: INFO: Got endpoints: latency-svc-9q4ml [1.370784865s] Jul 1 00:42:15.445: INFO: Created: latency-svc-9s6rc Jul 1 00:42:15.520: INFO: Got endpoints: latency-svc-9s6rc [1.536840644s] Jul 1 00:42:15.588: INFO: Created: latency-svc-h4v8g Jul 1 00:42:15.669: INFO: Got endpoints: latency-svc-h4v8g [1.58636923s] Jul 1 00:42:15.725: INFO: Created: latency-svc-59vgs Jul 1 00:42:15.748: INFO: Got endpoints: latency-svc-59vgs [1.611103664s] Jul 1 00:42:15.887: INFO: Created: latency-svc-ql5jc Jul 1 00:42:16.029: INFO: Got endpoints: latency-svc-ql5jc [1.837368573s] Jul 1 00:42:16.091: INFO: Created: latency-svc-7jqkq Jul 1 00:42:16.154: INFO: Got endpoints: latency-svc-7jqkq [1.822438051s] Jul 1 00:42:16.224: INFO: Created: latency-svc-p68cf Jul 1 00:42:16.379: INFO: Got endpoints: latency-svc-p68cf [2.012327268s] Jul 1 00:42:16.489: INFO: Created: latency-svc-xcng6 Jul 1 00:42:16.517: INFO: Got endpoints: latency-svc-xcng6 [2.050807494s] Jul 1 00:42:16.694: INFO: Created: latency-svc-9jwk6 Jul 1 00:42:16.722: INFO: Got endpoints: latency-svc-9jwk6 [2.198708296s] Jul 1 00:42:16.850: INFO: Created: latency-svc-xv2n7 Jul 1 00:42:16.854: INFO: Got endpoints: latency-svc-xv2n7 [2.288807386s] Jul 1 00:42:16.889: INFO: Created: latency-svc-6gl6l Jul 1 00:42:16.908: INFO: Got endpoints: latency-svc-6gl6l [2.180638764s] Jul 1 00:42:17.018: INFO: Created: latency-svc-5gclm Jul 1 00:42:17.072: INFO: Got endpoints: latency-svc-5gclm [2.25368262s] Jul 1 00:42:17.208: INFO: Created: latency-svc-fkfj8 Jul 1 00:42:17.268: INFO: Got endpoints: latency-svc-fkfj8 [2.317140324s] Jul 1 00:42:17.424: INFO: Created: latency-svc-ssjs6 Jul 1 00:42:17.483: INFO: Got endpoints: latency-svc-ssjs6 [2.365176667s] Jul 1 00:42:17.573: INFO: Created: latency-svc-x2ssl Jul 1 00:42:17.593: INFO: Got endpoints: latency-svc-x2ssl [2.365600068s] Jul 1 00:42:17.747: INFO: Created: latency-svc-ndv2l Jul 1 00:42:17.753: INFO: Got endpoints: latency-svc-ndv2l [2.398986168s] Jul 1 00:42:17.796: INFO: Created: latency-svc-xw5lw Jul 1 00:42:17.911: INFO: Got endpoints: latency-svc-xw5lw [2.390618891s] Jul 1 00:42:17.971: INFO: Created: latency-svc-2967s Jul 1 00:42:17.996: INFO: Got endpoints: latency-svc-2967s [2.326273352s] Jul 1 00:42:18.089: INFO: Created: latency-svc-92hqh Jul 1 00:42:18.097: INFO: Got endpoints: latency-svc-92hqh [2.348980099s] Jul 1 00:42:18.227: INFO: Created: latency-svc-6pm6n Jul 1 00:42:18.248: INFO: Got endpoints: latency-svc-6pm6n [2.219094946s] Jul 1 00:42:18.283: INFO: Created: latency-svc-gpmv9 Jul 1 00:42:18.297: INFO: Got endpoints: latency-svc-gpmv9 [2.142008134s] Jul 1 00:42:18.390: INFO: Created: latency-svc-jdcrm Jul 1 00:42:18.405: INFO: Got endpoints: latency-svc-jdcrm [2.025638836s] Jul 1 00:42:18.450: INFO: Created: latency-svc-42lgt Jul 1 00:42:18.465: INFO: Got endpoints: latency-svc-42lgt [1.948162175s] Jul 1 00:42:18.546: INFO: Created: latency-svc-9hvvp Jul 1 00:42:18.562: INFO: Got endpoints: latency-svc-9hvvp [1.840319985s] Jul 1 00:42:18.600: INFO: Created: latency-svc-nwrlv Jul 1 00:42:18.617: INFO: Got endpoints: latency-svc-nwrlv [1.762667825s] Jul 1 00:42:18.744: INFO: Created: latency-svc-pp7t9 Jul 1 00:42:18.801: INFO: Got endpoints: 
latency-svc-pp7t9 [1.893210839s] Jul 1 00:42:18.951: INFO: Created: latency-svc-lwgz5 Jul 1 00:42:18.995: INFO: Got endpoints: latency-svc-lwgz5 [1.923429764s] Jul 1 00:42:19.131: INFO: Created: latency-svc-6p47x Jul 1 00:42:19.162: INFO: Got endpoints: latency-svc-6p47x [1.893988022s] Jul 1 00:42:19.218: INFO: Created: latency-svc-vxzn8 Jul 1 00:42:19.228: INFO: Got endpoints: latency-svc-vxzn8 [1.745079991s] Jul 1 00:42:19.290: INFO: Created: latency-svc-ss4wl Jul 1 00:42:19.307: INFO: Got endpoints: latency-svc-ss4wl [1.714320497s] Jul 1 00:42:19.332: INFO: Created: latency-svc-zswlk Jul 1 00:42:19.349: INFO: Got endpoints: latency-svc-zswlk [1.595571812s] Jul 1 00:42:19.413: INFO: Created: latency-svc-pzs2w Jul 1 00:42:19.416: INFO: Got endpoints: latency-svc-pzs2w [1.504923388s] Jul 1 00:42:19.445: INFO: Created: latency-svc-l6px4 Jul 1 00:42:19.457: INFO: Got endpoints: latency-svc-l6px4 [1.461610926s] Jul 1 00:42:19.490: INFO: Created: latency-svc-mhdm4 Jul 1 00:42:19.592: INFO: Got endpoints: latency-svc-mhdm4 [1.494186283s] Jul 1 00:42:19.593: INFO: Created: latency-svc-zhnj7 Jul 1 00:42:19.602: INFO: Got endpoints: latency-svc-zhnj7 [1.354307423s] Jul 1 00:42:19.625: INFO: Created: latency-svc-knf7m Jul 1 00:42:19.638: INFO: Got endpoints: latency-svc-knf7m [1.341680729s] Jul 1 00:42:19.729: INFO: Created: latency-svc-xmd2m Jul 1 00:42:19.732: INFO: Got endpoints: latency-svc-xmd2m [1.327834453s] Jul 1 00:42:19.764: INFO: Created: latency-svc-gf45m Jul 1 00:42:19.784: INFO: Got endpoints: latency-svc-gf45m [1.318961588s] Jul 1 00:42:19.819: INFO: Created: latency-svc-q4sf4 Jul 1 00:42:19.891: INFO: Got endpoints: latency-svc-q4sf4 [1.329210894s] Jul 1 00:42:19.938: INFO: Created: latency-svc-wcpml Jul 1 00:42:19.973: INFO: Got endpoints: latency-svc-wcpml [1.356237576s] Jul 1 00:42:20.058: INFO: Created: latency-svc-ds2sj Jul 1 00:42:20.084: INFO: Got endpoints: latency-svc-ds2sj [1.283118694s] Jul 1 00:42:20.117: INFO: Created: latency-svc-765dh Jul 1 00:42:20.154: INFO: Got endpoints: latency-svc-765dh [1.159046002s] Jul 1 00:42:20.189: INFO: Created: latency-svc-hmbmk Jul 1 00:42:20.217: INFO: Got endpoints: latency-svc-hmbmk [1.05492935s] Jul 1 00:42:20.237: INFO: Created: latency-svc-6zfnq Jul 1 00:42:20.254: INFO: Got endpoints: latency-svc-6zfnq [1.025678226s] Jul 1 00:42:20.317: INFO: Created: latency-svc-6gcn8 Jul 1 00:42:20.325: INFO: Got endpoints: latency-svc-6gcn8 [1.018452881s] Jul 1 00:42:20.350: INFO: Created: latency-svc-pwk4v Jul 1 00:42:20.381: INFO: Got endpoints: latency-svc-pwk4v [1.031581448s] Jul 1 00:42:20.454: INFO: Created: latency-svc-plnm2 Jul 1 00:42:20.464: INFO: Got endpoints: latency-svc-plnm2 [1.048322237s] Jul 1 00:42:20.483: INFO: Created: latency-svc-pkx7l Jul 1 00:42:20.507: INFO: Got endpoints: latency-svc-pkx7l [1.049256262s] Jul 1 00:42:20.537: INFO: Created: latency-svc-vs277 Jul 1 00:42:20.585: INFO: Got endpoints: latency-svc-vs277 [993.544669ms] Jul 1 00:42:20.615: INFO: Created: latency-svc-mfkxr Jul 1 00:42:20.640: INFO: Got endpoints: latency-svc-mfkxr [1.037068166s] Jul 1 00:42:20.663: INFO: Created: latency-svc-xb9x9 Jul 1 00:42:20.682: INFO: Got endpoints: latency-svc-xb9x9 [1.043647397s] Jul 1 00:42:20.747: INFO: Created: latency-svc-lvqrv Jul 1 00:42:20.784: INFO: Got endpoints: latency-svc-lvqrv [1.051823645s] Jul 1 00:42:20.886: INFO: Created: latency-svc-w85lw Jul 1 00:42:20.889: INFO: Got endpoints: latency-svc-w85lw [1.105156546s] Jul 1 00:42:20.933: INFO: Created: latency-svc-dj4mx Jul 1 00:42:20.953: INFO: Got endpoints: 
latency-svc-dj4mx [1.061947027s] Jul 1 00:42:20.980: INFO: Created: latency-svc-9wj9h Jul 1 00:42:20.999: INFO: Got endpoints: latency-svc-9wj9h [1.026247271s] Jul 1 00:42:21.046: INFO: Created: latency-svc-qc5m5 Jul 1 00:42:21.056: INFO: Got endpoints: latency-svc-qc5m5 [971.453962ms] Jul 1 00:42:21.083: INFO: Created: latency-svc-q4mzk Jul 1 00:42:21.125: INFO: Got endpoints: latency-svc-q4mzk [970.899175ms] Jul 1 00:42:21.196: INFO: Created: latency-svc-4cr2k Jul 1 00:42:21.207: INFO: Got endpoints: latency-svc-4cr2k [989.669216ms] Jul 1 00:42:21.232: INFO: Created: latency-svc-5hvtv Jul 1 00:42:21.258: INFO: Got endpoints: latency-svc-5hvtv [1.003794083s] Jul 1 00:42:21.286: INFO: Created: latency-svc-jtlck Jul 1 00:42:21.316: INFO: Got endpoints: latency-svc-jtlck [990.667746ms] Jul 1 00:42:21.328: INFO: Created: latency-svc-qvrw7 Jul 1 00:42:21.360: INFO: Got endpoints: latency-svc-qvrw7 [978.927187ms] Jul 1 00:42:21.390: INFO: Created: latency-svc-wttqs Jul 1 00:42:21.400: INFO: Got endpoints: latency-svc-wttqs [935.881077ms] Jul 1 00:42:21.448: INFO: Created: latency-svc-pzcrc Jul 1 00:42:21.472: INFO: Got endpoints: latency-svc-pzcrc [965.262716ms] Jul 1 00:42:21.508: INFO: Created: latency-svc-b4dvg Jul 1 00:42:21.526: INFO: Got endpoints: latency-svc-b4dvg [941.05518ms] Jul 1 00:42:21.580: INFO: Created: latency-svc-sstt9 Jul 1 00:42:21.606: INFO: Got endpoints: latency-svc-sstt9 [966.387817ms] Jul 1 00:42:21.653: INFO: Created: latency-svc-n4lpc Jul 1 00:42:21.678: INFO: Got endpoints: latency-svc-n4lpc [995.954497ms] Jul 1 00:42:21.736: INFO: Created: latency-svc-cb54r Jul 1 00:42:21.740: INFO: Got endpoints: latency-svc-cb54r [955.365217ms] Jul 1 00:42:21.808: INFO: Created: latency-svc-j4vf6 Jul 1 00:42:21.822: INFO: Got endpoints: latency-svc-j4vf6 [932.488757ms] Jul 1 00:42:21.885: INFO: Created: latency-svc-lrx4h Jul 1 00:42:21.935: INFO: Got endpoints: latency-svc-lrx4h [981.720353ms] Jul 1 00:42:21.936: INFO: Created: latency-svc-kg9bb Jul 1 00:42:21.970: INFO: Got endpoints: latency-svc-kg9bb [970.879069ms] Jul 1 00:42:22.065: INFO: Created: latency-svc-7zbtx Jul 1 00:42:22.071: INFO: Got endpoints: latency-svc-7zbtx [1.014519057s] Jul 1 00:42:22.091: INFO: Created: latency-svc-s5gl9 Jul 1 00:42:22.106: INFO: Got endpoints: latency-svc-s5gl9 [980.139732ms] Jul 1 00:42:22.156: INFO: Created: latency-svc-vgbmt Jul 1 00:42:22.226: INFO: Got endpoints: latency-svc-vgbmt [1.019180302s] Jul 1 00:42:22.253: INFO: Created: latency-svc-sglmw Jul 1 00:42:22.268: INFO: Got endpoints: latency-svc-sglmw [1.009872439s] Jul 1 00:42:22.313: INFO: Created: latency-svc-lsfzt Jul 1 00:42:22.352: INFO: Got endpoints: latency-svc-lsfzt [1.036091118s] Jul 1 00:42:22.367: INFO: Created: latency-svc-xndmx Jul 1 00:42:22.390: INFO: Got endpoints: latency-svc-xndmx [1.030187607s] Jul 1 00:42:22.420: INFO: Created: latency-svc-45hcm Jul 1 00:42:22.437: INFO: Got endpoints: latency-svc-45hcm [1.037019779s] Jul 1 00:42:22.484: INFO: Created: latency-svc-g5whq Jul 1 00:42:22.499: INFO: Got endpoints: latency-svc-g5whq [1.026670948s] Jul 1 00:42:22.541: INFO: Created: latency-svc-8twbw Jul 1 00:42:22.564: INFO: Got endpoints: latency-svc-8twbw [1.03740172s] Jul 1 00:42:22.622: INFO: Created: latency-svc-vpjlg Jul 1 00:42:22.660: INFO: Got endpoints: latency-svc-vpjlg [1.053944054s] Jul 1 00:42:22.709: INFO: Created: latency-svc-cbhfz Jul 1 00:42:22.760: INFO: Got endpoints: latency-svc-cbhfz [1.0818896s] Jul 1 00:42:22.781: INFO: Created: latency-svc-csxg4 Jul 1 00:42:22.799: INFO: Got endpoints: 
latency-svc-csxg4 [1.059352465s] Jul 1 00:42:22.835: INFO: Created: latency-svc-lshrc Jul 1 00:42:22.854: INFO: Got endpoints: latency-svc-lshrc [1.031534066s] Jul 1 00:42:22.903: INFO: Created: latency-svc-lnh2w Jul 1 00:42:22.930: INFO: Created: latency-svc-mkhnh Jul 1 00:42:22.930: INFO: Got endpoints: latency-svc-lnh2w [995.357024ms] Jul 1 00:42:22.960: INFO: Got endpoints: latency-svc-mkhnh [989.616568ms] Jul 1 00:42:22.994: INFO: Created: latency-svc-m24rm Jul 1 00:42:23.034: INFO: Got endpoints: latency-svc-m24rm [963.726676ms] Jul 1 00:42:23.057: INFO: Created: latency-svc-ndqrn Jul 1 00:42:23.084: INFO: Got endpoints: latency-svc-ndqrn [978.033483ms] Jul 1 00:42:23.116: INFO: Created: latency-svc-msc4h Jul 1 00:42:23.132: INFO: Got endpoints: latency-svc-msc4h [905.989591ms] Jul 1 00:42:23.178: INFO: Created: latency-svc-mfwjn Jul 1 00:42:23.206: INFO: Got endpoints: latency-svc-mfwjn [938.157284ms] Jul 1 00:42:23.249: INFO: Created: latency-svc-xlnxq Jul 1 00:42:23.276: INFO: Got endpoints: latency-svc-xlnxq [923.866723ms] Jul 1 00:42:23.334: INFO: Created: latency-svc-c5njw Jul 1 00:42:23.351: INFO: Got endpoints: latency-svc-c5njw [961.006461ms] Jul 1 00:42:23.380: INFO: Created: latency-svc-cxcm9 Jul 1 00:42:23.397: INFO: Got endpoints: latency-svc-cxcm9 [959.458749ms] Jul 1 00:42:23.423: INFO: Created: latency-svc-6vnmr Jul 1 00:42:23.477: INFO: Got endpoints: latency-svc-6vnmr [978.686867ms] Jul 1 00:42:23.495: INFO: Created: latency-svc-n29m2 Jul 1 00:42:23.512: INFO: Got endpoints: latency-svc-n29m2 [948.007721ms] Jul 1 00:42:23.538: INFO: Created: latency-svc-9cr4d Jul 1 00:42:23.554: INFO: Got endpoints: latency-svc-9cr4d [894.033231ms] Jul 1 00:42:23.609: INFO: Created: latency-svc-kz9rp Jul 1 00:42:23.662: INFO: Created: latency-svc-kz6qj Jul 1 00:42:23.662: INFO: Got endpoints: latency-svc-kz9rp [901.828133ms] Jul 1 00:42:23.693: INFO: Got endpoints: latency-svc-kz6qj [893.598967ms] Jul 1 00:42:23.746: INFO: Created: latency-svc-8m47t Jul 1 00:42:23.771: INFO: Created: latency-svc-zbpwb Jul 1 00:42:23.771: INFO: Got endpoints: latency-svc-8m47t [917.56532ms] Jul 1 00:42:23.790: INFO: Got endpoints: latency-svc-zbpwb [859.295784ms] Jul 1 00:42:23.825: INFO: Created: latency-svc-h8klj Jul 1 00:42:23.909: INFO: Got endpoints: latency-svc-h8klj [949.614003ms] Jul 1 00:42:23.932: INFO: Created: latency-svc-bztt7 Jul 1 00:42:23.952: INFO: Got endpoints: latency-svc-bztt7 [917.651338ms] Jul 1 00:42:23.981: INFO: Created: latency-svc-zmh72 Jul 1 00:42:24.007: INFO: Got endpoints: latency-svc-zmh72 [923.000803ms] Jul 1 00:42:24.064: INFO: Created: latency-svc-576g8 Jul 1 00:42:24.085: INFO: Got endpoints: latency-svc-576g8 [952.423573ms] Jul 1 00:42:24.123: INFO: Created: latency-svc-rrhgx Jul 1 00:42:24.190: INFO: Got endpoints: latency-svc-rrhgx [984.38045ms] Jul 1 00:42:24.226: INFO: Created: latency-svc-n5r5m Jul 1 00:42:24.241: INFO: Got endpoints: latency-svc-n5r5m [965.281402ms] Jul 1 00:42:24.275: INFO: Created: latency-svc-kvbfv Jul 1 00:42:24.328: INFO: Got endpoints: latency-svc-kvbfv [977.530404ms] Jul 1 00:42:24.358: INFO: Created: latency-svc-gtmzj Jul 1 00:42:24.376: INFO: Got endpoints: latency-svc-gtmzj [978.877486ms] Jul 1 00:42:24.406: INFO: Created: latency-svc-2mth5 Jul 1 00:42:24.423: INFO: Got endpoints: latency-svc-2mth5 [945.103152ms] Jul 1 00:42:24.491: INFO: Created: latency-svc-7nmvq Jul 1 00:42:24.507: INFO: Got endpoints: latency-svc-7nmvq [995.333229ms] Jul 1 00:42:24.531: INFO: Created: latency-svc-67t6t Jul 1 00:42:24.550: INFO: Got endpoints: 
latency-svc-67t6t [995.706242ms] Jul 1 00:42:24.603: INFO: Created: latency-svc-vzjpj Jul 1 00:42:24.641: INFO: Got endpoints: latency-svc-vzjpj [978.883438ms] Jul 1 00:42:24.642: INFO: Created: latency-svc-7xcrg Jul 1 00:42:24.652: INFO: Got endpoints: latency-svc-7xcrg [959.497631ms] Jul 1 00:42:24.695: INFO: Created: latency-svc-zm4sj Jul 1 00:42:24.735: INFO: Got endpoints: latency-svc-zm4sj [963.756721ms] Jul 1 00:42:24.772: INFO: Created: latency-svc-qp9ks Jul 1 00:42:24.791: INFO: Got endpoints: latency-svc-qp9ks [1.001774867s] Jul 1 00:42:24.820: INFO: Created: latency-svc-jwn7z Jul 1 00:42:24.869: INFO: Got endpoints: latency-svc-jwn7z [959.119336ms] Jul 1 00:42:24.873: INFO: Created: latency-svc-jhp27 Jul 1 00:42:24.887: INFO: Got endpoints: latency-svc-jhp27 [935.439772ms] Jul 1 00:42:24.916: INFO: Created: latency-svc-8t7vv Jul 1 00:42:24.936: INFO: Got endpoints: latency-svc-8t7vv [929.681167ms] Jul 1 00:42:24.966: INFO: Created: latency-svc-qtfb7 Jul 1 00:42:25.004: INFO: Got endpoints: latency-svc-qtfb7 [919.321711ms] Jul 1 00:42:25.031: INFO: Created: latency-svc-k9p4l Jul 1 00:42:25.046: INFO: Got endpoints: latency-svc-k9p4l [855.235783ms] Jul 1 00:42:25.065: INFO: Created: latency-svc-xphzk Jul 1 00:42:25.075: INFO: Got endpoints: latency-svc-xphzk [833.702096ms] Jul 1 00:42:25.136: INFO: Created: latency-svc-bg6rp Jul 1 00:42:25.148: INFO: Got endpoints: latency-svc-bg6rp [819.161131ms] Jul 1 00:42:25.174: INFO: Created: latency-svc-kzczf Jul 1 00:42:25.190: INFO: Got endpoints: latency-svc-kzczf [814.3802ms] Jul 1 00:42:25.216: INFO: Created: latency-svc-hxh2l Jul 1 00:42:25.233: INFO: Got endpoints: latency-svc-hxh2l [810.108939ms] Jul 1 00:42:25.280: INFO: Created: latency-svc-4vmt5 Jul 1 00:42:25.287: INFO: Got endpoints: latency-svc-4vmt5 [779.412226ms] Jul 1 00:42:25.311: INFO: Created: latency-svc-jpg2x Jul 1 00:42:25.355: INFO: Got endpoints: latency-svc-jpg2x [804.686063ms] Jul 1 00:42:25.430: INFO: Created: latency-svc-cz4hk Jul 1 00:42:25.434: INFO: Got endpoints: latency-svc-cz4hk [792.717624ms] Jul 1 00:42:25.462: INFO: Created: latency-svc-58tvf Jul 1 00:42:25.491: INFO: Got endpoints: latency-svc-58tvf [838.795061ms] Jul 1 00:42:25.521: INFO: Created: latency-svc-jpnb7 Jul 1 00:42:25.567: INFO: Got endpoints: latency-svc-jpnb7 [831.801814ms] Jul 1 00:42:25.575: INFO: Created: latency-svc-rmqqb Jul 1 00:42:25.619: INFO: Got endpoints: latency-svc-rmqqb [827.029487ms] Jul 1 00:42:25.666: INFO: Created: latency-svc-wdh84 Jul 1 00:42:25.711: INFO: Got endpoints: latency-svc-wdh84 [842.526467ms] Jul 1 00:42:25.732: INFO: Created: latency-svc-b8ngp Jul 1 00:42:25.761: INFO: Got endpoints: latency-svc-b8ngp [873.829648ms] Jul 1 00:42:25.791: INFO: Created: latency-svc-cq6jb Jul 1 00:42:25.806: INFO: Got endpoints: latency-svc-cq6jb [869.486729ms] Jul 1 00:42:25.870: INFO: Created: latency-svc-nhpln Jul 1 00:42:25.896: INFO: Got endpoints: latency-svc-nhpln [892.040217ms] Jul 1 00:42:25.942: INFO: Created: latency-svc-h9gxm Jul 1 00:42:26.016: INFO: Got endpoints: latency-svc-h9gxm [970.716612ms] Jul 1 00:42:26.019: INFO: Created: latency-svc-dd9t8 Jul 1 00:42:26.068: INFO: Got endpoints: latency-svc-dd9t8 [992.783752ms] Jul 1 00:42:26.105: INFO: Created: latency-svc-s6m64 Jul 1 00:42:26.114: INFO: Got endpoints: latency-svc-s6m64 [966.575008ms] Jul 1 00:42:26.184: INFO: Created: latency-svc-6f6rx Jul 1 00:42:26.186: INFO: Got endpoints: latency-svc-6f6rx [996.133528ms] Jul 1 00:42:26.346: INFO: Created: latency-svc-5cwtw Jul 1 00:42:26.349: INFO: Got endpoints: 
latency-svc-5cwtw [1.116351358s] Jul 1 00:42:26.374: INFO: Created: latency-svc-xxhjj Jul 1 00:42:26.384: INFO: Got endpoints: latency-svc-xxhjj [1.097220639s] Jul 1 00:42:26.411: INFO: Created: latency-svc-bbqbd Jul 1 00:42:26.423: INFO: Got endpoints: latency-svc-bbqbd [1.06831113s] Jul 1 00:42:26.478: INFO: Created: latency-svc-zvnqc Jul 1 00:42:26.493: INFO: Got endpoints: latency-svc-zvnqc [1.059166736s] Jul 1 00:42:26.535: INFO: Created: latency-svc-kqdkx Jul 1 00:42:26.569: INFO: Got endpoints: latency-svc-kqdkx [1.077337725s] Jul 1 00:42:26.640: INFO: Created: latency-svc-29njp Jul 1 00:42:26.643: INFO: Got endpoints: latency-svc-29njp [1.07609342s] Jul 1 00:42:26.697: INFO: Created: latency-svc-6fd6t Jul 1 00:42:26.717: INFO: Got endpoints: latency-svc-6fd6t [1.098689582s] Jul 1 00:42:26.771: INFO: Created: latency-svc-6697t Jul 1 00:42:26.800: INFO: Got endpoints: latency-svc-6697t [1.088439268s] Jul 1 00:42:26.801: INFO: Created: latency-svc-f6bs6 Jul 1 00:42:26.836: INFO: Got endpoints: latency-svc-f6bs6 [1.074279031s] Jul 1 00:42:26.903: INFO: Created: latency-svc-twv65 Jul 1 00:42:26.930: INFO: Got endpoints: latency-svc-twv65 [1.124324261s] Jul 1 00:42:26.931: INFO: Created: latency-svc-4x879 Jul 1 00:42:26.961: INFO: Got endpoints: latency-svc-4x879 [1.064573511s] Jul 1 00:42:26.997: INFO: Created: latency-svc-72rv7 Jul 1 00:42:27.040: INFO: Got endpoints: latency-svc-72rv7 [1.023675566s] Jul 1 00:42:27.058: INFO: Created: latency-svc-sb2j2 Jul 1 00:42:27.093: INFO: Got endpoints: latency-svc-sb2j2 [1.025261026s] Jul 1 00:42:27.136: INFO: Created: latency-svc-bdcd2 Jul 1 00:42:27.178: INFO: Got endpoints: latency-svc-bdcd2 [1.063516548s] Jul 1 00:42:27.181: INFO: Created: latency-svc-r7ld7 Jul 1 00:42:27.193: INFO: Got endpoints: latency-svc-r7ld7 [1.006896432s] Jul 1 00:42:27.212: INFO: Created: latency-svc-89t5k Jul 1 00:42:27.230: INFO: Got endpoints: latency-svc-89t5k [880.715036ms] Jul 1 00:42:27.248: INFO: Created: latency-svc-qzcgz Jul 1 00:42:27.266: INFO: Got endpoints: latency-svc-qzcgz [881.659715ms] Jul 1 00:42:27.306: INFO: Created: latency-svc-dvttq Jul 1 00:42:27.310: INFO: Got endpoints: latency-svc-dvttq [886.639277ms] Jul 1 00:42:27.345: INFO: Created: latency-svc-pvljc Jul 1 00:42:27.357: INFO: Got endpoints: latency-svc-pvljc [864.163747ms] Jul 1 00:42:27.387: INFO: Created: latency-svc-zzdj6 Jul 1 00:42:27.442: INFO: Got endpoints: latency-svc-zzdj6 [872.946189ms] Jul 1 00:42:27.452: INFO: Created: latency-svc-8cmf9 Jul 1 00:42:27.465: INFO: Got endpoints: latency-svc-8cmf9 [822.447906ms] Jul 1 00:42:27.495: INFO: Created: latency-svc-7nkbg Jul 1 00:42:27.514: INFO: Got endpoints: latency-svc-7nkbg [796.991891ms] Jul 1 00:42:27.538: INFO: Created: latency-svc-7gp9k Jul 1 00:42:27.575: INFO: Got endpoints: latency-svc-7gp9k [775.003711ms] Jul 1 00:42:27.591: INFO: Created: latency-svc-2jb9z Jul 1 00:42:27.627: INFO: Got endpoints: latency-svc-2jb9z [791.461247ms] Jul 1 00:42:27.670: INFO: Created: latency-svc-949n7 Jul 1 00:42:27.711: INFO: Got endpoints: latency-svc-949n7 [780.699467ms] Jul 1 00:42:27.729: INFO: Created: latency-svc-77gvs Jul 1 00:42:27.760: INFO: Got endpoints: latency-svc-77gvs [799.134266ms] Jul 1 00:42:27.795: INFO: Created: latency-svc-9v2qj Jul 1 00:42:27.810: INFO: Got endpoints: latency-svc-9v2qj [769.448702ms] Jul 1 00:42:27.872: INFO: Created: latency-svc-7ztdh Jul 1 00:42:27.914: INFO: Got endpoints: latency-svc-7ztdh [820.897929ms] Jul 1 00:42:27.980: INFO: Created: latency-svc-r2bn6 Jul 1 00:42:28.020: INFO: Got endpoints: 
latency-svc-r2bn6 [842.462115ms] Jul 1 00:42:28.078: INFO: Created: latency-svc-cpdrd Jul 1 00:42:28.113: INFO: Got endpoints: latency-svc-cpdrd [919.081874ms] Jul 1 00:42:28.118: INFO: Created: latency-svc-v5g5f Jul 1 00:42:28.134: INFO: Got endpoints: latency-svc-v5g5f [904.495539ms] Jul 1 00:42:28.154: INFO: Created: latency-svc-nlmkx Jul 1 00:42:28.165: INFO: Got endpoints: latency-svc-nlmkx [899.231155ms] Jul 1 00:42:28.210: INFO: Created: latency-svc-8gstr Jul 1 00:42:28.286: INFO: Got endpoints: latency-svc-8gstr [976.673334ms] Jul 1 00:42:28.357: INFO: Created: latency-svc-w6g8c Jul 1 00:42:28.364: INFO: Got endpoints: latency-svc-w6g8c [1.007272191s] Jul 1 00:42:28.418: INFO: Created: latency-svc-gtt7d Jul 1 00:42:28.420: INFO: Got endpoints: latency-svc-gtt7d [978.53966ms] Jul 1 00:42:28.449: INFO: Created: latency-svc-5wxtn Jul 1 00:42:28.461: INFO: Got endpoints: latency-svc-5wxtn [995.524776ms] Jul 1 00:42:28.485: INFO: Created: latency-svc-g5tlj Jul 1 00:42:28.504: INFO: Got endpoints: latency-svc-g5tlj [989.140496ms] Jul 1 00:42:28.549: INFO: Created: latency-svc-kjcwz Jul 1 00:42:28.582: INFO: Got endpoints: latency-svc-kjcwz [1.007114733s] Jul 1 00:42:28.582: INFO: Created: latency-svc-nwlbh Jul 1 00:42:28.618: INFO: Got endpoints: latency-svc-nwlbh [990.314922ms] Jul 1 00:42:28.701: INFO: Created: latency-svc-pwc56 Jul 1 00:42:28.703: INFO: Got endpoints: latency-svc-pwc56 [991.983424ms] Jul 1 00:42:28.798: INFO: Created: latency-svc-tmb5k Jul 1 00:42:28.873: INFO: Got endpoints: latency-svc-tmb5k [1.112983539s] Jul 1 00:42:28.929: INFO: Created: latency-svc-wdfs4 Jul 1 00:42:28.961: INFO: Got endpoints: latency-svc-wdfs4 [1.150895165s] Jul 1 00:42:29.011: INFO: Created: latency-svc-kk8bh Jul 1 00:42:29.031: INFO: Got endpoints: latency-svc-kk8bh [1.116870425s] Jul 1 00:42:29.078: INFO: Created: latency-svc-qndh5 Jul 1 00:42:29.094: INFO: Got endpoints: latency-svc-qndh5 [1.073742116s] Jul 1 00:42:29.172: INFO: Created: latency-svc-xcmn8 Jul 1 00:42:29.193: INFO: Got endpoints: latency-svc-xcmn8 [1.080616147s] Jul 1 00:42:29.223: INFO: Created: latency-svc-z9jf9 Jul 1 00:42:29.256: INFO: Got endpoints: latency-svc-z9jf9 [1.121796907s] Jul 1 00:42:29.316: INFO: Created: latency-svc-ntjlf Jul 1 00:42:29.342: INFO: Got endpoints: latency-svc-ntjlf [1.17715475s] Jul 1 00:42:29.372: INFO: Created: latency-svc-5bv58 Jul 1 00:42:29.384: INFO: Got endpoints: latency-svc-5bv58 [1.097261839s] Jul 1 00:42:29.409: INFO: Created: latency-svc-8mhx7 Jul 1 00:42:29.442: INFO: Got endpoints: latency-svc-8mhx7 [1.077300686s] Jul 1 00:42:29.459: INFO: Created: latency-svc-6kh42 Jul 1 00:42:29.474: INFO: Got endpoints: latency-svc-6kh42 [1.053382748s] Jul 1 00:42:29.474: INFO: Latencies: [99.874335ms 154.101264ms 208.299539ms 348.558954ms 382.857748ms 482.558989ms 539.391408ms 581.417799ms 743.909106ms 769.448702ms 775.003711ms 779.412226ms 780.699467ms 791.461247ms 792.717624ms 796.991891ms 799.134266ms 804.686063ms 810.108939ms 814.3802ms 819.161131ms 820.897929ms 822.447906ms 827.029487ms 831.801814ms 833.702096ms 834.522688ms 838.795061ms 842.462115ms 842.526467ms 855.235783ms 859.295784ms 864.163747ms 869.486729ms 872.946189ms 873.829648ms 880.715036ms 881.659715ms 886.639277ms 892.040217ms 893.598967ms 894.033231ms 899.231155ms 901.828133ms 904.495539ms 905.989591ms 917.56532ms 917.651338ms 919.081874ms 919.321711ms 923.000803ms 923.866723ms 929.681167ms 932.488757ms 935.439772ms 935.881077ms 938.157284ms 941.05518ms 945.103152ms 948.007721ms 949.614003ms 952.423573ms 955.365217ms 
959.119336ms 959.458749ms 959.497631ms 961.006461ms 963.726676ms 963.756721ms 965.262716ms 965.281402ms 966.387817ms 966.575008ms 967.746051ms 970.716612ms 970.879069ms 970.899175ms 971.453962ms 976.673334ms 977.530404ms 978.033483ms 978.53966ms 978.686867ms 978.877486ms 978.883438ms 978.927187ms 980.139732ms 981.720353ms 984.38045ms 989.140496ms 989.616568ms 989.669216ms 990.314922ms 990.667746ms 991.983424ms 992.783752ms 993.544669ms 995.333229ms 995.357024ms 995.524776ms 995.706242ms 995.954497ms 996.133528ms 1.001774867s 1.003794083s 1.006896432s 1.007114733s 1.007272191s 1.009872439s 1.014519057s 1.018452881s 1.019180302s 1.023675566s 1.025261026s 1.025678226s 1.026247271s 1.026670948s 1.030187607s 1.031534066s 1.031581448s 1.036091118s 1.037019779s 1.037068166s 1.03740172s 1.043647397s 1.048322237s 1.049256262s 1.051823645s 1.053382748s 1.053944054s 1.05492935s 1.059166736s 1.059352465s 1.061947027s 1.063516548s 1.064573511s 1.06831113s 1.073742116s 1.074279031s 1.07609342s 1.077300686s 1.077337725s 1.080616147s 1.0818896s 1.088439268s 1.097220639s 1.097261839s 1.098689582s 1.105156546s 1.112983539s 1.116351358s 1.116870425s 1.121796907s 1.124324261s 1.134597976s 1.150895165s 1.159046002s 1.17715475s 1.243470233s 1.283118694s 1.318961588s 1.327834453s 1.329210894s 1.341680729s 1.354307423s 1.356237576s 1.370784865s 1.461610926s 1.494186283s 1.504923388s 1.536840644s 1.58636923s 1.595571812s 1.611103664s 1.714320497s 1.745079991s 1.762667825s 1.822438051s 1.837368573s 1.840319985s 1.893210839s 1.893988022s 1.923429764s 1.948162175s 2.012327268s 2.025638836s 2.050807494s 2.142008134s 2.180638764s 2.198708296s 2.219094946s 2.25368262s 2.288807386s 2.317140324s 2.326273352s 2.348980099s 2.365176667s 2.365600068s 2.390618891s 2.398986168s] Jul 1 00:42:29.474: INFO: 50 %ile: 995.706242ms Jul 1 00:42:29.474: INFO: 90 %ile: 1.893210839s Jul 1 00:42:29.474: INFO: 99 %ile: 2.390618891s Jul 1 00:42:29.474: INFO: Total sample count: 200 [AfterEach] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 1 00:42:29.474: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svc-latency-9308" for this suite. • [SLOW TEST:20.195 seconds] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Service endpoints latency should not be very high [Conformance]","total":294,"completed":198,"skipped":3359,"failed":1,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]"]} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a pod. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 1 00:42:29.498: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a pod. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a Pod that fits quota STEP: Ensuring ResourceQuota status captures the pod usage STEP: Not allowing a pod to be created that exceeds remaining quota STEP: Not allowing a pod to be created that exceeds remaining quota(validation on extended resources) STEP: Ensuring a pod cannot update its resource requirements STEP: Ensuring attempts to update pod resource requirements did not change quota usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 1 00:42:42.897: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-9094" for this suite. • [SLOW TEST:13.409 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a pod. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a pod. [Conformance]","total":294,"completed":199,"skipped":3377,"failed":1,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]"]} SSSSSSS ------------------------------ [k8s.io] Variable Expansion should succeed in writing subpaths in container [sig-storage][Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 1 00:42:42.907: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should succeed in writing subpaths in container [sig-storage][Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating the pod STEP: waiting for pod running STEP: creating a file in subpath Jul 1 00:42:47.248: INFO: ExecWithOptions {Command:[/bin/sh -c touch /volume_mount/mypath/foo/test.log] Namespace:var-expansion-1759 PodName:var-expansion-e73aadbd-92d0-458e-81f2-0660fde8c5f1 ContainerName:dapi-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jul 1 00:42:47.248: INFO: >>> kubeConfig: /root/.kube/config I0701 00:42:47.288468 8 log.go:172] (0xc000b96160) (0xc001742be0) Create stream I0701 00:42:47.288491 8 log.go:172] (0xc000b96160) (0xc001742be0) Stream added, broadcasting: 1 I0701 00:42:47.290159 8 log.go:172] (0xc000b96160) Reply frame received for 1 I0701 00:42:47.290186 8 log.go:172] (0xc000b96160) (0xc001742d20) Create stream I0701 00:42:47.290199 8 log.go:172] (0xc000b96160) (0xc001742d20) Stream added, broadcasting: 3 I0701 00:42:47.290900 8 log.go:172] (0xc000b96160) Reply frame received for 3 I0701 00:42:47.290930 8 log.go:172] (0xc000b96160) (0xc00266e140) Create stream I0701 00:42:47.290942 8 log.go:172] 
(0xc000b96160) (0xc00266e140) Stream added, broadcasting: 5 I0701 00:42:47.291717 8 log.go:172] (0xc000b96160) Reply frame received for 5 I0701 00:42:47.450713 8 log.go:172] (0xc000b96160) Data frame received for 3 I0701 00:42:47.450734 8 log.go:172] (0xc001742d20) (3) Data frame handling I0701 00:42:47.450786 8 log.go:172] (0xc000b96160) Data frame received for 5 I0701 00:42:47.450809 8 log.go:172] (0xc00266e140) (5) Data frame handling I0701 00:42:47.452596 8 log.go:172] (0xc000b96160) Data frame received for 1 I0701 00:42:47.452644 8 log.go:172] (0xc001742be0) (1) Data frame handling I0701 00:42:47.452677 8 log.go:172] (0xc001742be0) (1) Data frame sent I0701 00:42:47.452706 8 log.go:172] (0xc000b96160) (0xc001742be0) Stream removed, broadcasting: 1 I0701 00:42:47.452736 8 log.go:172] (0xc000b96160) Go away received I0701 00:42:47.452814 8 log.go:172] (0xc000b96160) (0xc001742be0) Stream removed, broadcasting: 1 I0701 00:42:47.452849 8 log.go:172] (0xc000b96160) (0xc001742d20) Stream removed, broadcasting: 3 I0701 00:42:47.452875 8 log.go:172] (0xc000b96160) (0xc00266e140) Stream removed, broadcasting: 5 STEP: test for file in mounted path Jul 1 00:42:47.464: INFO: ExecWithOptions {Command:[/bin/sh -c test -f /subpath_mount/test.log] Namespace:var-expansion-1759 PodName:var-expansion-e73aadbd-92d0-458e-81f2-0660fde8c5f1 ContainerName:dapi-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jul 1 00:42:47.464: INFO: >>> kubeConfig: /root/.kube/config I0701 00:42:47.526166 8 log.go:172] (0xc000b96790) (0xc0017434a0) Create stream I0701 00:42:47.526188 8 log.go:172] (0xc000b96790) (0xc0017434a0) Stream added, broadcasting: 1 I0701 00:42:47.527557 8 log.go:172] (0xc000b96790) Reply frame received for 1 I0701 00:42:47.527592 8 log.go:172] (0xc000b96790) (0xc00121ac80) Create stream I0701 00:42:47.527603 8 log.go:172] (0xc000b96790) (0xc00121ac80) Stream added, broadcasting: 3 I0701 00:42:47.528440 8 log.go:172] (0xc000b96790) Reply frame received for 3 I0701 00:42:47.528476 8 log.go:172] (0xc000b96790) (0xc00121ae60) Create stream I0701 00:42:47.528497 8 log.go:172] (0xc000b96790) (0xc00121ae60) Stream added, broadcasting: 5 I0701 00:42:47.529094 8 log.go:172] (0xc000b96790) Reply frame received for 5 I0701 00:42:47.627869 8 log.go:172] (0xc000b96790) Data frame received for 3 I0701 00:42:47.627900 8 log.go:172] (0xc00121ac80) (3) Data frame handling I0701 00:42:47.627920 8 log.go:172] (0xc000b96790) Data frame received for 5 I0701 00:42:47.627934 8 log.go:172] (0xc00121ae60) (5) Data frame handling I0701 00:42:47.628785 8 log.go:172] (0xc000b96790) Data frame received for 1 I0701 00:42:47.628803 8 log.go:172] (0xc0017434a0) (1) Data frame handling I0701 00:42:47.628819 8 log.go:172] (0xc0017434a0) (1) Data frame sent I0701 00:42:47.628831 8 log.go:172] (0xc000b96790) (0xc0017434a0) Stream removed, broadcasting: 1 I0701 00:42:47.628853 8 log.go:172] (0xc000b96790) Go away received I0701 00:42:47.629032 8 log.go:172] (0xc000b96790) (0xc0017434a0) Stream removed, broadcasting: 1 I0701 00:42:47.629049 8 log.go:172] (0xc000b96790) (0xc00121ac80) Stream removed, broadcasting: 3 I0701 00:42:47.629063 8 log.go:172] (0xc000b96790) (0xc00121ae60) Stream removed, broadcasting: 5 STEP: updating the annotation value Jul 1 00:42:48.160: INFO: Successfully updated pod "var-expansion-e73aadbd-92d0-458e-81f2-0660fde8c5f1" STEP: waiting for annotated pod running STEP: deleting the pod gracefully Jul 1 00:42:48.175: INFO: Deleting pod 
"var-expansion-e73aadbd-92d0-458e-81f2-0660fde8c5f1" in namespace "var-expansion-1759" Jul 1 00:42:48.182: INFO: Wait up to 5m0s for pod "var-expansion-e73aadbd-92d0-458e-81f2-0660fde8c5f1" to be fully deleted [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 1 00:43:26.267: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-1759" for this suite. • [SLOW TEST:43.370 seconds] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should succeed in writing subpaths in container [sig-storage][Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Variable Expansion should succeed in writing subpaths in container [sig-storage][Slow] [Conformance]","total":294,"completed":200,"skipped":3384,"failed":1,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]"]} SSSSSSSS ------------------------------ [k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class should be set on Pods with matching resource requests and limits for memory and cpu [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 1 00:43:26.277: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods Set QOS Class /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:161 [It] should be set on Pods with matching resource requests and limits for memory and cpu [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying QOS class is set on the pod [AfterEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 1 00:43:26.403: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-9213" for this suite. 
•{"msg":"PASSED [k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class should be set on Pods with matching resource requests and limits for memory and cpu [Conformance]","total":294,"completed":201,"skipped":3392,"failed":1,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]"]} SS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 1 00:43:26.440: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Jul 1 00:43:27.194: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Jul 1 00:43:29.205: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729161007, loc:(*time.Location)(0x80643c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729161007, loc:(*time.Location)(0x80643c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729161007, loc:(*time.Location)(0x80643c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729161007, loc:(*time.Location)(0x80643c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)} Jul 1 00:43:31.220: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729161007, loc:(*time.Location)(0x80643c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729161007, loc:(*time.Location)(0x80643c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729161007, loc:(*time.Location)(0x80643c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729161007, loc:(*time.Location)(0x80643c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Jul 1 00:43:34.246: INFO: Waiting for amount of 
service:e2e-test-webhook endpoints to be 1 [It] should be able to deny attaching pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Registering the webhook via the AdmissionRegistration API STEP: create a pod STEP: 'kubectl attach' the pod, should be denied by the webhook Jul 1 00:43:38.339: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config attach --namespace=webhook-6905 to-be-attached-pod -i -c=container1' Jul 1 00:43:38.459: INFO: rc: 1 [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 1 00:43:38.498: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-6905" for this suite. STEP: Destroying namespace "webhook-6905-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:12.242 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to deny attaching pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","total":294,"completed":202,"skipped":3394,"failed":1,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]"]} [sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 1 00:43:38.683: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name configmap-test-volume-1d1a91b6-6a00-4be1-a50b-4789b1678aa7 STEP: Creating a pod to test consume configMaps Jul 1 00:43:38.792: INFO: Waiting up to 5m0s for pod "pod-configmaps-2d6a479d-c742-41ba-883a-9610d0b72397" in namespace "configmap-1065" to be "Succeeded or Failed" Jul 1 00:43:38.808: INFO: Pod "pod-configmaps-2d6a479d-c742-41ba-883a-9610d0b72397": Phase="Pending", Reason="", readiness=false. Elapsed: 15.437504ms Jul 1 00:43:40.825: INFO: Pod "pod-configmaps-2d6a479d-c742-41ba-883a-9610d0b72397": Phase="Pending", Reason="", readiness=false. Elapsed: 2.032940735s Jul 1 00:43:42.862: INFO: Pod "pod-configmaps-2d6a479d-c742-41ba-883a-9610d0b72397": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.069412295s STEP: Saw pod success Jul 1 00:43:42.862: INFO: Pod "pod-configmaps-2d6a479d-c742-41ba-883a-9610d0b72397" satisfied condition "Succeeded or Failed" Jul 1 00:43:42.864: INFO: Trying to get logs from node latest-worker pod pod-configmaps-2d6a479d-c742-41ba-883a-9610d0b72397 container configmap-volume-test: STEP: delete the pod Jul 1 00:43:42.914: INFO: Waiting for pod pod-configmaps-2d6a479d-c742-41ba-883a-9610d0b72397 to disappear Jul 1 00:43:42.931: INFO: Pod pod-configmaps-2d6a479d-c742-41ba-883a-9610d0b72397 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 1 00:43:42.931: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-1065" for this suite. •{"msg":"PASSED [sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":294,"completed":203,"skipped":3394,"failed":1,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 1 00:43:42.939: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the rc STEP: delete the rc STEP: wait for the rc to be deleted Jul 1 00:43:50.313: INFO: 3 pods remaining Jul 1 00:43:50.313: INFO: 0 pods has nil DeletionTimestamp Jul 1 00:43:50.313: INFO: Jul 1 00:43:51.867: INFO: 0 pods remaining Jul 1 00:43:51.867: INFO: 0 pods has nil DeletionTimestamp Jul 1 00:43:51.867: INFO: STEP: Gathering metrics W0701 00:43:52.438510 8 metrics_grabber.go:94] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. 
Jul 1 00:43:52.438: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 1 00:43:52.438: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-2961" for this suite. • [SLOW TEST:10.259 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]","total":294,"completed":204,"skipped":3443,"failed":1,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]"]} SSSS ------------------------------ [sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 1 00:43:53.199: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating the pod Jul 1 00:43:58.634: INFO: Successfully updated pod "annotationupdatea87e5575-4ff1-4b07-b894-3e63ab75414a" [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 1 00:44:00.651: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-4642" for this suite. 
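
The annotation-update test that follows the GC run works because the pod mounts a projected downward API volume: the kubelet renders the pod's annotations into a file and rewrites that file when the metadata changes, so the test flips an annotation and polls the file. A rough sketch of the volume shape, assuming an illustrative volume name and file path:

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	// Projected downward API volume: the kubelet writes metadata.annotations
	// into a file and refreshes it on change (subject to its sync period).
	vol := corev1.Volume{
		Name: "podinfo", // illustrative volume name
		VolumeSource: corev1.VolumeSource{
			Projected: &corev1.ProjectedVolumeSource{
				Sources: []corev1.VolumeProjection{{
					DownwardAPI: &corev1.DownwardAPIProjection{
						Items: []corev1.DownwardAPIVolumeFile{{
							Path: "annotations", // illustrative file name
							FieldRef: &corev1.ObjectFieldSelector{
								FieldPath: "metadata.annotations",
							},
						}},
					},
				}},
			},
		},
	}
	b, _ := json.MarshalIndent(vol, "", "  ")
	fmt.Println(string(b))
}
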
• [SLOW TEST:7.461 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:36 should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance]","total":294,"completed":205,"skipped":3447,"failed":1,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]"]} SSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 1 00:44:00.660: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir 0777 on node default medium Jul 1 00:44:00.743: INFO: Waiting up to 5m0s for pod "pod-5224072d-05aa-4194-bc66-2d07beaddc61" in namespace "emptydir-5177" to be "Succeeded or Failed" Jul 1 00:44:00.747: INFO: Pod "pod-5224072d-05aa-4194-bc66-2d07beaddc61": Phase="Pending", Reason="", readiness=false. Elapsed: 3.657189ms Jul 1 00:44:02.751: INFO: Pod "pod-5224072d-05aa-4194-bc66-2d07beaddc61": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007896633s Jul 1 00:44:04.755: INFO: Pod "pod-5224072d-05aa-4194-bc66-2d07beaddc61": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012267423s STEP: Saw pod success Jul 1 00:44:04.755: INFO: Pod "pod-5224072d-05aa-4194-bc66-2d07beaddc61" satisfied condition "Succeeded or Failed" Jul 1 00:44:04.759: INFO: Trying to get logs from node latest-worker2 pod pod-5224072d-05aa-4194-bc66-2d07beaddc61 container test-container: STEP: delete the pod Jul 1 00:44:04.820: INFO: Waiting for pod pod-5224072d-05aa-4194-bc66-2d07beaddc61 to disappear Jul 1 00:44:04.837: INFO: Pod pod-5224072d-05aa-4194-bc66-2d07beaddc61 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 1 00:44:04.837: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-5177" for this suite. 
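
The (root,0777,default) case above writes a mode-0777 file into an emptyDir backed by the node's default storage medium and verifies the resulting permissions. A sketch of the pod involved; the busybox image and shell command are illustrative stand-ins for the e2e mount-test image:

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "emptydir-0777"}, // illustrative
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "test-volume",
				// No Medium set => the node's default storage backs the volume.
				VolumeSource: corev1.VolumeSource{EmptyDir: &corev1.EmptyDirVolumeSource{}},
			}},
			Containers: []corev1.Container{{
				Name:  "test-container",
				Image: "busybox", // stand-in image
				// Create a file with the tested mode and print it back.
				Command: []string{"sh", "-c",
					"touch /test-volume/f && chmod 0777 /test-volume/f && stat -c %a /test-volume/f"},
				VolumeMounts: []corev1.VolumeMount{{Name: "test-volume", MountPath: "/test-volume"}},
			}},
		},
	}
	b, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(b))
}
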
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":294,"completed":206,"skipped":3453,"failed":1,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]"]} SSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 1 00:44:04.844: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir 0666 on tmpfs Jul 1 00:44:04.959: INFO: Waiting up to 5m0s for pod "pod-48a98dbe-d052-4eec-aadd-4f2db94db125" in namespace "emptydir-6336" to be "Succeeded or Failed" Jul 1 00:44:04.974: INFO: Pod "pod-48a98dbe-d052-4eec-aadd-4f2db94db125": Phase="Pending", Reason="", readiness=false. Elapsed: 14.603573ms Jul 1 00:44:06.988: INFO: Pod "pod-48a98dbe-d052-4eec-aadd-4f2db94db125": Phase="Pending", Reason="", readiness=false. Elapsed: 2.028667813s Jul 1 00:44:08.993: INFO: Pod "pod-48a98dbe-d052-4eec-aadd-4f2db94db125": Phase="Running", Reason="", readiness=true. Elapsed: 4.033024619s Jul 1 00:44:10.995: INFO: Pod "pod-48a98dbe-d052-4eec-aadd-4f2db94db125": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.035153186s STEP: Saw pod success Jul 1 00:44:10.995: INFO: Pod "pod-48a98dbe-d052-4eec-aadd-4f2db94db125" satisfied condition "Succeeded or Failed" Jul 1 00:44:10.997: INFO: Trying to get logs from node latest-worker pod pod-48a98dbe-d052-4eec-aadd-4f2db94db125 container test-container: STEP: delete the pod Jul 1 00:44:11.061: INFO: Waiting for pod pod-48a98dbe-d052-4eec-aadd-4f2db94db125 to disappear Jul 1 00:44:11.072: INFO: Pod pod-48a98dbe-d052-4eec-aadd-4f2db94db125 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 1 00:44:11.072: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-6336" for this suite. 
• [SLOW TEST:6.234 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42 should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":294,"completed":207,"skipped":3464,"failed":1,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]"]} SSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 1 00:44:11.078: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:179 [It] should contain environment variables for services [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Jul 1 00:44:15.277: INFO: Waiting up to 5m0s for pod "client-envvars-78bba0a3-ad06-49d2-9c48-cca695cd483b" in namespace "pods-3604" to be "Succeeded or Failed" Jul 1 00:44:15.292: INFO: Pod "client-envvars-78bba0a3-ad06-49d2-9c48-cca695cd483b": Phase="Pending", Reason="", readiness=false. Elapsed: 14.926154ms Jul 1 00:44:17.295: INFO: Pod "client-envvars-78bba0a3-ad06-49d2-9c48-cca695cd483b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017934954s Jul 1 00:44:19.299: INFO: Pod "client-envvars-78bba0a3-ad06-49d2-9c48-cca695cd483b": Phase="Running", Reason="", readiness=true. Elapsed: 4.021961806s Jul 1 00:44:21.413: INFO: Pod "client-envvars-78bba0a3-ad06-49d2-9c48-cca695cd483b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.13507511s STEP: Saw pod success Jul 1 00:44:21.413: INFO: Pod "client-envvars-78bba0a3-ad06-49d2-9c48-cca695cd483b" satisfied condition "Succeeded or Failed" Jul 1 00:44:21.415: INFO: Trying to get logs from node latest-worker pod client-envvars-78bba0a3-ad06-49d2-9c48-cca695cd483b container env3cont: STEP: delete the pod Jul 1 00:44:21.858: INFO: Waiting for pod client-envvars-78bba0a3-ad06-49d2-9c48-cca695cd483b to disappear Jul 1 00:44:21.930: INFO: Pod client-envvars-78bba0a3-ad06-49d2-9c48-cca695cd483b no longer exists [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 1 00:44:21.930: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-3604" for this suite. 
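
The service-environment test above passes because the kubelet injects environment variables for every Service that already exists in the pod's namespace when the pod starts: a Service named "fooservice" yields FOOSERVICE_SERVICE_HOST and FOOSERVICE_SERVICE_PORT (names uppercased, dashes turned into underscores), plus docker-link-style FOOSERVICE_PORT_* variables. A sketch of a client pod that surfaces them; the pod name and image are illustrative, while the container name matches the one in the log:

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "client-envvars"}, // illustrative
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "env3cont", // container name seen in the log above
				Image:   "busybox",  // stand-in image
				Command: []string{"sh", "-c", "env"}, // dump the injected variables
			}},
		},
	}
	b, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(b))
}
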
• [SLOW TEST:10.861 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should contain environment variables for services [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance]","total":294,"completed":208,"skipped":3479,"failed":1,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]"]} S ------------------------------ [k8s.io] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 1 00:44:21.939: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:179 [It] should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: updating the pod Jul 1 00:44:26.764: INFO: Successfully updated pod "pod-update-activedeadlineseconds-40270e78-a137-438c-ae69-5e326e1d43fc" Jul 1 00:44:26.764: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-40270e78-a137-438c-ae69-5e326e1d43fc" in namespace "pods-431" to be "terminated due to deadline exceeded" Jul 1 00:44:26.791: INFO: Pod "pod-update-activedeadlineseconds-40270e78-a137-438c-ae69-5e326e1d43fc": Phase="Running", Reason="", readiness=true. Elapsed: 27.33488ms Jul 1 00:44:28.796: INFO: Pod "pod-update-activedeadlineseconds-40270e78-a137-438c-ae69-5e326e1d43fc": Phase="Failed", Reason="DeadlineExceeded", readiness=false. Elapsed: 2.031748449s Jul 1 00:44:28.796: INFO: Pod "pod-update-activedeadlineseconds-40270e78-a137-438c-ae69-5e326e1d43fc" satisfied condition "terminated due to deadline exceeded" [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 1 00:44:28.796: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-431" for this suite. 
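
The update above succeeds because spec.activeDeadlineSeconds is one of the few pod-spec fields that may be mutated on a running pod, and it may only be added or decreased, never increased or removed. Once the deadline elapses, the kubelet kills the pod and its phase becomes Failed with reason DeadlineExceeded, exactly as logged. A sketch with an illustrative deadline value:

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// Illustrative: the e2e test patches a small value onto a running pod.
	deadline := int64(5) // seconds, counted from pod start
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-update-activedeadlineseconds"},
		Spec: corev1.PodSpec{
			ActiveDeadlineSeconds: &deadline,
			Containers: []corev1.Container{{
				Name:  "main",
				Image: "busybox", // illustrative long-running image
				Command: []string{"sleep", "3600"},
			}},
		},
	}
	b, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(b))
}
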
• [SLOW TEST:6.867 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]","total":294,"completed":209,"skipped":3480,"failed":1,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]"]} SS ------------------------------ [sig-cli] Kubectl client Proxy server should support --unix-socket=/path [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 1 00:44:28.806: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:251 [It] should support --unix-socket=/path [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Starting the proxy Jul 1 00:44:28.912: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config proxy --unix-socket=/tmp/kubectl-proxy-unix688326621/test' STEP: retrieving proxy /api/ output [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 1 00:44:29.002: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-2736" for this suite. 
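[Annotation: --unix-socket makes kubectl proxy listen on a local socket instead of a TCP port, so any HTTP client that can dial the socket reaches the apiserver through the proxy. A sketch of the retrieval step, assuming a proxy already running on a hypothetical socket path:]

package main

import (
    "context"
    "fmt"
    "io/ioutil"
    "net"
    "net/http"
)

func main() {
    // Assumes: kubectl proxy --unix-socket=/tmp/kubectl-proxy.sock is running.
    client := &http.Client{
        Transport: &http.Transport{
            DialContext: func(ctx context.Context, network, addr string) (net.Conn, error) {
                return net.Dial("unix", "/tmp/kubectl-proxy.sock")
            },
        },
    }
    // The URL's host part is ignored; DialContext pins every request to the socket.
    resp, err := client.Get("http://localhost/api/")
    if err != nil {
        panic(err)
    }
    defer resp.Body.Close()
    body, err := ioutil.ReadAll(resp.Body)
    if err != nil {
        panic(err)
    }
    fmt.Println(string(body)) // the same /api/ discovery document the test retrieves
}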
•{"msg":"PASSED [sig-cli] Kubectl client Proxy server should support --unix-socket=/path [Conformance]","total":294,"completed":210,"skipped":3482,"failed":1,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 1 00:44:29.010: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:126 STEP: Setting up server cert STEP: Create role binding to let cr conversion webhook read extension-apiserver-authentication STEP: Deploying the custom resource conversion webhook pod STEP: Wait for the deployment to be ready Jul 1 00:44:29.767: INFO: deployment "sample-crd-conversion-webhook-deployment" doesn't have the required revision set Jul 1 00:44:31.777: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729161069, loc:(*time.Location)(0x80643c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729161069, loc:(*time.Location)(0x80643c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729161070, loc:(*time.Location)(0x80643c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729161069, loc:(*time.Location)(0x80643c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-69bd8c6bb8\" is progressing."}}, CollisionCount:(*int32)(nil)} Jul 1 00:44:33.782: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729161069, loc:(*time.Location)(0x80643c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729161069, loc:(*time.Location)(0x80643c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729161070, loc:(*time.Location)(0x80643c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729161069, loc:(*time.Location)(0x80643c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-69bd8c6bb8\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: 
Verifying the service has paired with the endpoint Jul 1 00:44:36.822: INFO: Waiting for amount of service:e2e-test-crd-conversion-webhook endpoints to be 1 [It] should be able to convert from CR v1 to CR v2 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Jul 1 00:44:36.827: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating a v1 custom resource STEP: v2 custom resource should be converted [AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 1 00:44:37.950: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-webhook-9341" for this suite. [AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:137 • [SLOW TEST:9.058 seconds] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to convert from CR v1 to CR v2 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]","total":294,"completed":211,"skipped":3504,"failed":1,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]"]} SSSSSSSS ------------------------------ [sig-storage] Downward API volume should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 1 00:44:38.068: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin Jul 1 00:44:38.154: INFO: Waiting up to 5m0s for pod "downwardapi-volume-ef01e784-2251-476a-8b70-8d62deedd365" in namespace "downward-api-3213" to be "Succeeded or Failed" Jul 1 00:44:38.158: INFO: Pod "downwardapi-volume-ef01e784-2251-476a-8b70-8d62deedd365": Phase="Pending", Reason="", readiness=false. Elapsed: 3.979303ms Jul 1 00:44:40.162: INFO: Pod "downwardapi-volume-ef01e784-2251-476a-8b70-8d62deedd365": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008412795s Jul 1 00:44:42.263: INFO: Pod "downwardapi-volume-ef01e784-2251-476a-8b70-8d62deedd365": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.109523487s STEP: Saw pod success Jul 1 00:44:42.263: INFO: Pod "downwardapi-volume-ef01e784-2251-476a-8b70-8d62deedd365" satisfied condition "Succeeded or Failed" Jul 1 00:44:42.267: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-ef01e784-2251-476a-8b70-8d62deedd365 container client-container: STEP: delete the pod Jul 1 00:44:42.348: INFO: Waiting for pod downwardapi-volume-ef01e784-2251-476a-8b70-8d62deedd365 to disappear Jul 1 00:44:42.431: INFO: Pod downwardapi-volume-ef01e784-2251-476a-8b70-8d62deedd365 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 1 00:44:42.431: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-3213" for this suite. •{"msg":"PASSED [sig-storage] Downward API volume should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":294,"completed":212,"skipped":3512,"failed":1,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]"]} ------------------------------ [sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 1 00:44:42.455: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating secret with name secret-test-4f36f9b5-d9b6-47b9-9639-f7536900897f STEP: Creating a pod to test consume secrets Jul 1 00:44:42.657: INFO: Waiting up to 5m0s for pod "pod-secrets-b8d60c1d-3031-4443-a1a3-81b542cf33dc" in namespace "secrets-3115" to be "Succeeded or Failed" Jul 1 00:44:42.761: INFO: Pod "pod-secrets-b8d60c1d-3031-4443-a1a3-81b542cf33dc": Phase="Pending", Reason="", readiness=false. Elapsed: 103.705672ms Jul 1 00:44:44.764: INFO: Pod "pod-secrets-b8d60c1d-3031-4443-a1a3-81b542cf33dc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.10670745s Jul 1 00:44:46.768: INFO: Pod "pod-secrets-b8d60c1d-3031-4443-a1a3-81b542cf33dc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.110905907s STEP: Saw pod success Jul 1 00:44:46.768: INFO: Pod "pod-secrets-b8d60c1d-3031-4443-a1a3-81b542cf33dc" satisfied condition "Succeeded or Failed" Jul 1 00:44:46.772: INFO: Trying to get logs from node latest-worker pod pod-secrets-b8d60c1d-3031-4443-a1a3-81b542cf33dc container secret-volume-test: STEP: delete the pod Jul 1 00:44:46.936: INFO: Waiting for pod pod-secrets-b8d60c1d-3031-4443-a1a3-81b542cf33dc to disappear Jul 1 00:44:46.971: INFO: Pod pod-secrets-b8d60c1d-3031-4443-a1a3-81b542cf33dc no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 1 00:44:46.971: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-3115" for this suite. 
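[Annotation: the pod this test builds combines three knobs: a secret volume with a restrictive defaultMode, a non-root runAsUser, and an fsGroup so the tmpfs-mounted files remain readable to that user. A sketch of an equivalent spec, not the test's actual fixture; names, UIDs, and image are hypothetical:]

package main

import (
    "context"
    "os"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/clientcmd"
)

func main() {
    cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
    if err != nil {
        panic(err)
    }
    cs, err := kubernetes.NewForConfig(cfg)
    if err != nil {
        panic(err)
    }

    mode := int32(0440) // files readable by owner and group only
    runAsUser, fsGroup := int64(1000), int64(2000)
    pod := &corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "pod-secrets-demo"},
        Spec: corev1.PodSpec{
            RestartPolicy: corev1.RestartPolicyNever,
            SecurityContext: &corev1.PodSecurityContext{
                RunAsUser: &runAsUser,
                FSGroup:   &fsGroup, // volume files get this GID
            },
            Volumes: []corev1.Volume{{
                Name: "secret-volume",
                VolumeSource: corev1.VolumeSource{
                    Secret: &corev1.SecretVolumeSource{
                        SecretName:  "secret-test", // hypothetical, must pre-exist
                        DefaultMode: &mode,
                    },
                },
            }},
            Containers: []corev1.Container{{
                Name:         "secret-volume-test",
                Image:        "busybox",
                Command:      []string{"sh", "-c", "ls -ln /etc/secret-volume"},
                VolumeMounts: []corev1.VolumeMount{{Name: "secret-volume", MountPath: "/etc/secret-volume"}},
            }},
        },
    }
    if _, err := cs.CoreV1().Pods("default").Create(context.TODO(), pod, metav1.CreateOptions{}); err != nil {
        panic(err)
    }
}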
•{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":294,"completed":213,"skipped":3512,"failed":1,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should support remote command execution over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 1 00:44:46.980: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:179 [It] should support remote command execution over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Jul 1 00:44:47.135: INFO: >>> kubeConfig: /root/.kube/config STEP: creating the pod STEP: submitting the pod to kubernetes [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 1 00:44:51.268: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-4079" for this suite. •{"msg":"PASSED [k8s.io] Pods should support remote command execution over websockets [NodeConformance] [Conformance]","total":294,"completed":214,"skipped":3582,"failed":1,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]"]} ------------------------------ [sig-apps] ReplicaSet should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 1 00:44:51.275: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replicaset STEP: Waiting for a default service account to be provisioned in namespace [It] should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Jul 1 00:44:51.603: INFO: Creating ReplicaSet my-hostname-basic-dae73826-0ec9-444f-ad5f-f4320eb0c5e5 Jul 1 00:44:51.832: INFO: Pod name my-hostname-basic-dae73826-0ec9-444f-ad5f-f4320eb0c5e5: Found 0 pods out of 1 Jul 1 00:44:56.837: INFO: Pod name my-hostname-basic-dae73826-0ec9-444f-ad5f-f4320eb0c5e5: Found 1 pods out of 1 Jul 1 00:44:56.837: INFO: Ensuring a pod for ReplicaSet "my-hostname-basic-dae73826-0ec9-444f-ad5f-f4320eb0c5e5" is running Jul 1 00:44:56.842: INFO: Pod "my-hostname-basic-dae73826-0ec9-444f-ad5f-f4320eb0c5e5-vw6ql" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-07-01 00:44:51 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-07-01 
00:44:54 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-07-01 00:44:54 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-07-01 00:44:51 +0000 UTC Reason: Message:}]) Jul 1 00:44:56.842: INFO: Trying to dial the pod Jul 1 00:45:01.870: INFO: Controller my-hostname-basic-dae73826-0ec9-444f-ad5f-f4320eb0c5e5: Got expected result from replica 1 [my-hostname-basic-dae73826-0ec9-444f-ad5f-f4320eb0c5e5-vw6ql]: "my-hostname-basic-dae73826-0ec9-444f-ad5f-f4320eb0c5e5-vw6ql", 1 of 1 required successes so far [AfterEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 1 00:45:01.870: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replicaset-7825" for this suite. • [SLOW TEST:10.603 seconds] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] ReplicaSet should serve a basic image on each replica with a public image [Conformance]","total":294,"completed":215,"skipped":3582,"failed":1,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]"]} SSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 1 00:45:01.878: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:88 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:103 STEP: Creating service test in namespace statefulset-3854 [It] should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a new StatefulSet Jul 1 00:45:01.999: INFO: Found 0 stateful pods, waiting for 3 Jul 1 00:45:12.005: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Jul 1 00:45:12.005: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Jul 1 00:45:12.005: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false Jul 1 00:45:22.004: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Jul 1 00:45:22.004: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Jul 1 00:45:22.004: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true Jul 1 
00:45:22.015: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3854 ss2-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Jul 1 00:45:22.479: INFO: stderr: "I0701 00:45:22.315968 2154 log.go:172] (0xc000b19550) (0xc000874d20) Create stream\nI0701 00:45:22.316027 2154 log.go:172] (0xc000b19550) (0xc000874d20) Stream added, broadcasting: 1\nI0701 00:45:22.321864 2154 log.go:172] (0xc000b19550) Reply frame received for 1\nI0701 00:45:22.321925 2154 log.go:172] (0xc000b19550) (0xc000867900) Create stream\nI0701 00:45:22.321940 2154 log.go:172] (0xc000b19550) (0xc000867900) Stream added, broadcasting: 3\nI0701 00:45:22.323247 2154 log.go:172] (0xc000b19550) Reply frame received for 3\nI0701 00:45:22.323299 2154 log.go:172] (0xc000b19550) (0xc000860960) Create stream\nI0701 00:45:22.323322 2154 log.go:172] (0xc000b19550) (0xc000860960) Stream added, broadcasting: 5\nI0701 00:45:22.324097 2154 log.go:172] (0xc000b19550) Reply frame received for 5\nI0701 00:45:22.406344 2154 log.go:172] (0xc000b19550) Data frame received for 5\nI0701 00:45:22.406371 2154 log.go:172] (0xc000860960) (5) Data frame handling\nI0701 00:45:22.406390 2154 log.go:172] (0xc000860960) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0701 00:45:22.471630 2154 log.go:172] (0xc000b19550) Data frame received for 5\nI0701 00:45:22.471692 2154 log.go:172] (0xc000860960) (5) Data frame handling\nI0701 00:45:22.471726 2154 log.go:172] (0xc000b19550) Data frame received for 3\nI0701 00:45:22.471751 2154 log.go:172] (0xc000867900) (3) Data frame handling\nI0701 00:45:22.471776 2154 log.go:172] (0xc000867900) (3) Data frame sent\nI0701 00:45:22.471797 2154 log.go:172] (0xc000b19550) Data frame received for 3\nI0701 00:45:22.471818 2154 log.go:172] (0xc000867900) (3) Data frame handling\nI0701 00:45:22.474030 2154 log.go:172] (0xc000b19550) Data frame received for 1\nI0701 00:45:22.474062 2154 log.go:172] (0xc000874d20) (1) Data frame handling\nI0701 00:45:22.474082 2154 log.go:172] (0xc000874d20) (1) Data frame sent\nI0701 00:45:22.474100 2154 log.go:172] (0xc000b19550) (0xc000874d20) Stream removed, broadcasting: 1\nI0701 00:45:22.474124 2154 log.go:172] (0xc000b19550) Go away received\nI0701 00:45:22.474578 2154 log.go:172] (0xc000b19550) (0xc000874d20) Stream removed, broadcasting: 1\nI0701 00:45:22.474601 2154 log.go:172] (0xc000b19550) (0xc000867900) Stream removed, broadcasting: 3\nI0701 00:45:22.474612 2154 log.go:172] (0xc000b19550) (0xc000860960) Stream removed, broadcasting: 5\n" Jul 1 00:45:22.479: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Jul 1 00:45:22.479: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss2-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' STEP: Updating StatefulSet template: update image from docker.io/library/httpd:2.4.38-alpine to docker.io/library/httpd:2.4.39-alpine Jul 1 00:45:32.514: INFO: Updating stateful set ss2 STEP: Creating a new revision STEP: Updating Pods in reverse ordinal order Jul 1 00:45:42.534: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3854 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jul 1 00:45:42.779: INFO: stderr: "I0701 00:45:42.684142 2174 log.go:172] (0xc000774160) (0xc000868640) Create stream\nI0701 00:45:42.684231 2174 log.go:172] 
(0xc000774160) (0xc000868640) Stream added, broadcasting: 1\nI0701 00:45:42.687048 2174 log.go:172] (0xc000774160) Reply frame received for 1\nI0701 00:45:42.687095 2174 log.go:172] (0xc000774160) (0xc0007041e0) Create stream\nI0701 00:45:42.687112 2174 log.go:172] (0xc000774160) (0xc0007041e0) Stream added, broadcasting: 3\nI0701 00:45:42.688238 2174 log.go:172] (0xc000774160) Reply frame received for 3\nI0701 00:45:42.688327 2174 log.go:172] (0xc000774160) (0xc000678fa0) Create stream\nI0701 00:45:42.688343 2174 log.go:172] (0xc000774160) (0xc000678fa0) Stream added, broadcasting: 5\nI0701 00:45:42.689593 2174 log.go:172] (0xc000774160) Reply frame received for 5\nI0701 00:45:42.769397 2174 log.go:172] (0xc000774160) Data frame received for 3\nI0701 00:45:42.769421 2174 log.go:172] (0xc0007041e0) (3) Data frame handling\nI0701 00:45:42.769430 2174 log.go:172] (0xc0007041e0) (3) Data frame sent\nI0701 00:45:42.769438 2174 log.go:172] (0xc000774160) Data frame received for 3\nI0701 00:45:42.769444 2174 log.go:172] (0xc0007041e0) (3) Data frame handling\nI0701 00:45:42.769470 2174 log.go:172] (0xc000774160) Data frame received for 5\nI0701 00:45:42.769498 2174 log.go:172] (0xc000678fa0) (5) Data frame handling\nI0701 00:45:42.769519 2174 log.go:172] (0xc000678fa0) (5) Data frame sent\nI0701 00:45:42.769542 2174 log.go:172] (0xc000774160) Data frame received for 5\nI0701 00:45:42.769559 2174 log.go:172] (0xc000678fa0) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0701 00:45:42.770791 2174 log.go:172] (0xc000774160) Data frame received for 1\nI0701 00:45:42.770807 2174 log.go:172] (0xc000868640) (1) Data frame handling\nI0701 00:45:42.770818 2174 log.go:172] (0xc000868640) (1) Data frame sent\nI0701 00:45:42.770939 2174 log.go:172] (0xc000774160) (0xc000868640) Stream removed, broadcasting: 1\nI0701 00:45:42.770961 2174 log.go:172] (0xc000774160) Go away received\nI0701 00:45:42.771335 2174 log.go:172] (0xc000774160) (0xc000868640) Stream removed, broadcasting: 1\nI0701 00:45:42.771358 2174 log.go:172] (0xc000774160) (0xc0007041e0) Stream removed, broadcasting: 3\nI0701 00:45:42.771371 2174 log.go:172] (0xc000774160) (0xc000678fa0) Stream removed, broadcasting: 5\n" Jul 1 00:45:42.779: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Jul 1 00:45:42.779: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss2-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Jul 1 00:45:52.815: INFO: Waiting for StatefulSet statefulset-3854/ss2 to complete update Jul 1 00:45:52.815: INFO: Waiting for Pod statefulset-3854/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Jul 1 00:45:52.815: INFO: Waiting for Pod statefulset-3854/ss2-1 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Jul 1 00:45:52.815: INFO: Waiting for Pod statefulset-3854/ss2-2 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Jul 1 00:46:02.824: INFO: Waiting for StatefulSet statefulset-3854/ss2 to complete update Jul 1 00:46:02.824: INFO: Waiting for Pod statefulset-3854/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Jul 1 00:46:02.824: INFO: Waiting for Pod statefulset-3854/ss2-1 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Jul 1 00:46:12.824: INFO: Waiting for StatefulSet statefulset-3854/ss2 to complete update Jul 1 00:46:12.824: INFO: Waiting for Pod statefulset-3854/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 STEP: Rolling back to 
a previous revision Jul 1 00:46:22.824: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3854 ss2-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Jul 1 00:46:23.083: INFO: stderr: "I0701 00:46:22.952289 2196 log.go:172] (0xc000a1f3f0) (0xc000ae45a0) Create stream\nI0701 00:46:22.952373 2196 log.go:172] (0xc000a1f3f0) (0xc000ae45a0) Stream added, broadcasting: 1\nI0701 00:46:22.956158 2196 log.go:172] (0xc000a1f3f0) Reply frame received for 1\nI0701 00:46:22.956208 2196 log.go:172] (0xc000a1f3f0) (0xc000524a00) Create stream\nI0701 00:46:22.956224 2196 log.go:172] (0xc000a1f3f0) (0xc000524a00) Stream added, broadcasting: 3\nI0701 00:46:22.957043 2196 log.go:172] (0xc000a1f3f0) Reply frame received for 3\nI0701 00:46:22.957080 2196 log.go:172] (0xc000a1f3f0) (0xc0004e40a0) Create stream\nI0701 00:46:22.957092 2196 log.go:172] (0xc000a1f3f0) (0xc0004e40a0) Stream added, broadcasting: 5\nI0701 00:46:22.958094 2196 log.go:172] (0xc000a1f3f0) Reply frame received for 5\nI0701 00:46:23.024590 2196 log.go:172] (0xc000a1f3f0) Data frame received for 5\nI0701 00:46:23.024616 2196 log.go:172] (0xc0004e40a0) (5) Data frame handling\nI0701 00:46:23.024629 2196 log.go:172] (0xc0004e40a0) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0701 00:46:23.075209 2196 log.go:172] (0xc000a1f3f0) Data frame received for 3\nI0701 00:46:23.075236 2196 log.go:172] (0xc000524a00) (3) Data frame handling\nI0701 00:46:23.075262 2196 log.go:172] (0xc000524a00) (3) Data frame sent\nI0701 00:46:23.075274 2196 log.go:172] (0xc000a1f3f0) Data frame received for 3\nI0701 00:46:23.075290 2196 log.go:172] (0xc000524a00) (3) Data frame handling\nI0701 00:46:23.075480 2196 log.go:172] (0xc000a1f3f0) Data frame received for 5\nI0701 00:46:23.075501 2196 log.go:172] (0xc0004e40a0) (5) Data frame handling\nI0701 00:46:23.077755 2196 log.go:172] (0xc000a1f3f0) Data frame received for 1\nI0701 00:46:23.077845 2196 log.go:172] (0xc000ae45a0) (1) Data frame handling\nI0701 00:46:23.077886 2196 log.go:172] (0xc000ae45a0) (1) Data frame sent\nI0701 00:46:23.077905 2196 log.go:172] (0xc000a1f3f0) (0xc000ae45a0) Stream removed, broadcasting: 1\nI0701 00:46:23.077918 2196 log.go:172] (0xc000a1f3f0) Go away received\nI0701 00:46:23.078282 2196 log.go:172] (0xc000a1f3f0) (0xc000ae45a0) Stream removed, broadcasting: 1\nI0701 00:46:23.078301 2196 log.go:172] (0xc000a1f3f0) (0xc000524a00) Stream removed, broadcasting: 3\nI0701 00:46:23.078312 2196 log.go:172] (0xc000a1f3f0) (0xc0004e40a0) Stream removed, broadcasting: 5\n" Jul 1 00:46:23.084: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Jul 1 00:46:23.084: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss2-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Jul 1 00:46:33.120: INFO: Updating stateful set ss2 STEP: Rolling back update in reverse ordinal order Jul 1 00:46:43.172: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3854 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jul 1 00:46:43.413: INFO: stderr: "I0701 00:46:43.311485 2217 log.go:172] (0xc000c12dc0) (0xc00068ad20) Create stream\nI0701 00:46:43.311543 2217 log.go:172] (0xc000c12dc0) (0xc00068ad20) Stream added, broadcasting: 1\nI0701 00:46:43.315955 2217 log.go:172] (0xc000c12dc0) Reply frame received 
for 1\nI0701 00:46:43.315990 2217 log.go:172] (0xc000c12dc0) (0xc000681220) Create stream\nI0701 00:46:43.315999 2217 log.go:172] (0xc000c12dc0) (0xc000681220) Stream added, broadcasting: 3\nI0701 00:46:43.316684 2217 log.go:172] (0xc000c12dc0) Reply frame received for 3\nI0701 00:46:43.316715 2217 log.go:172] (0xc000c12dc0) (0xc000392a00) Create stream\nI0701 00:46:43.316725 2217 log.go:172] (0xc000c12dc0) (0xc000392a00) Stream added, broadcasting: 5\nI0701 00:46:43.317673 2217 log.go:172] (0xc000c12dc0) Reply frame received for 5\nI0701 00:46:43.402967 2217 log.go:172] (0xc000c12dc0) Data frame received for 5\nI0701 00:46:43.402996 2217 log.go:172] (0xc000392a00) (5) Data frame handling\nI0701 00:46:43.403006 2217 log.go:172] (0xc000392a00) (5) Data frame sent\nI0701 00:46:43.403014 2217 log.go:172] (0xc000c12dc0) Data frame received for 5\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0701 00:46:43.403043 2217 log.go:172] (0xc000c12dc0) Data frame received for 3\nI0701 00:46:43.403093 2217 log.go:172] (0xc000681220) (3) Data frame handling\nI0701 00:46:43.403127 2217 log.go:172] (0xc000392a00) (5) Data frame handling\nI0701 00:46:43.403174 2217 log.go:172] (0xc000681220) (3) Data frame sent\nI0701 00:46:43.403192 2217 log.go:172] (0xc000c12dc0) Data frame received for 3\nI0701 00:46:43.403203 2217 log.go:172] (0xc000681220) (3) Data frame handling\nI0701 00:46:43.405083 2217 log.go:172] (0xc000c12dc0) Data frame received for 1\nI0701 00:46:43.405281 2217 log.go:172] (0xc00068ad20) (1) Data frame handling\nI0701 00:46:43.405326 2217 log.go:172] (0xc00068ad20) (1) Data frame sent\nI0701 00:46:43.405354 2217 log.go:172] (0xc000c12dc0) (0xc00068ad20) Stream removed, broadcasting: 1\nI0701 00:46:43.405373 2217 log.go:172] (0xc000c12dc0) Go away received\nI0701 00:46:43.405894 2217 log.go:172] (0xc000c12dc0) (0xc00068ad20) Stream removed, broadcasting: 1\nI0701 00:46:43.405915 2217 log.go:172] (0xc000c12dc0) (0xc000681220) Stream removed, broadcasting: 3\nI0701 00:46:43.405925 2217 log.go:172] (0xc000c12dc0) (0xc000392a00) Stream removed, broadcasting: 5\n" Jul 1 00:46:43.413: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Jul 1 00:46:43.413: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss2-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Jul 1 00:46:53.451: INFO: Waiting for StatefulSet statefulset-3854/ss2 to complete update Jul 1 00:46:53.451: INFO: Waiting for Pod statefulset-3854/ss2-0 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57 Jul 1 00:46:53.451: INFO: Waiting for Pod statefulset-3854/ss2-1 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57 Jul 1 00:46:53.451: INFO: Waiting for Pod statefulset-3854/ss2-2 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57 Jul 1 00:47:03.460: INFO: Waiting for StatefulSet statefulset-3854/ss2 to complete update Jul 1 00:47:03.460: INFO: Waiting for Pod statefulset-3854/ss2-0 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57 [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:114 Jul 1 00:47:13.460: INFO: Deleting all statefulset in ns statefulset-3854 Jul 1 00:47:13.464: INFO: Scaling statefulset ss2 to 0 Jul 1 00:47:43.503: INFO: Waiting for statefulset status.replicas updated to 0 Jul 1 00:47:43.506: INFO: Deleting statefulset ss2 [AfterEach] [sig-apps] StatefulSet 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 1 00:47:43.523: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-3854" for this suite. • [SLOW TEST:161.652 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance]","total":294,"completed":216,"skipped":3587,"failed":1,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]"]} SSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 1 00:47:43.530: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a watch on configmaps with label A STEP: creating a watch on configmaps with label B STEP: creating a watch on configmaps with label A or B STEP: creating a configmap with label A and ensuring the correct watchers observe the notification Jul 1 00:47:43.654: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-706 /api/v1/namespaces/watch-706/configmaps/e2e-watch-test-configmap-a f47b4d58-be39-440b-94b2-3eba2890505e 17255790 0 2020-07-01 00:47:43 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2020-07-01 00:47:43 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} Jul 1 00:47:43.654: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-706 /api/v1/namespaces/watch-706/configmaps/e2e-watch-test-configmap-a f47b4d58-be39-440b-94b2-3eba2890505e 17255790 0 2020-07-01 00:47:43 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2020-07-01 00:47:43 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} STEP: modifying configmap A and ensuring the correct watchers observe the notification Jul 1 00:47:53.664: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-706 /api/v1/namespaces/watch-706/configmaps/e2e-watch-test-configmap-a f47b4d58-be39-440b-94b2-3eba2890505e 17255894 0 2020-07-01 00:47:43 +0000 UTC 
map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2020-07-01 00:47:53 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} Jul 1 00:47:53.664: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-706 /api/v1/namespaces/watch-706/configmaps/e2e-watch-test-configmap-a f47b4d58-be39-440b-94b2-3eba2890505e 17255894 0 2020-07-01 00:47:43 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2020-07-01 00:47:53 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} STEP: modifying configmap A again and ensuring the correct watchers observe the notification Jul 1 00:48:03.673: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-706 /api/v1/namespaces/watch-706/configmaps/e2e-watch-test-configmap-a f47b4d58-be39-440b-94b2-3eba2890505e 17255925 0 2020-07-01 00:47:43 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2020-07-01 00:48:03 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} Jul 1 00:48:03.673: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-706 /api/v1/namespaces/watch-706/configmaps/e2e-watch-test-configmap-a f47b4d58-be39-440b-94b2-3eba2890505e 17255925 0 2020-07-01 00:47:43 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2020-07-01 00:48:03 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} STEP: deleting configmap A and ensuring the correct watchers observe the notification Jul 1 00:48:13.700: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-706 /api/v1/namespaces/watch-706/configmaps/e2e-watch-test-configmap-a f47b4d58-be39-440b-94b2-3eba2890505e 17255955 0 2020-07-01 00:47:43 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2020-07-01 00:48:03 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} Jul 1 00:48:13.700: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-706 /api/v1/namespaces/watch-706/configmaps/e2e-watch-test-configmap-a f47b4d58-be39-440b-94b2-3eba2890505e 17255955 0 2020-07-01 00:47:43 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2020-07-01 00:48:03 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} STEP: creating a configmap with label B and ensuring the correct watchers observe the notification Jul 1 00:48:23.713: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-706 /api/v1/namespaces/watch-706/configmaps/e2e-watch-test-configmap-b 9edd3166-de92-4d40-aa68-9e660584c957 17255985 0 2020-07-01 
00:48:23 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] [{e2e.test Update v1 2020-07-01 00:48:23 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} Jul 1 00:48:23.713: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-706 /api/v1/namespaces/watch-706/configmaps/e2e-watch-test-configmap-b 9edd3166-de92-4d40-aa68-9e660584c957 17255985 0 2020-07-01 00:48:23 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] [{e2e.test Update v1 2020-07-01 00:48:23 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} STEP: deleting configmap B and ensuring the correct watchers observe the notification Jul 1 00:48:33.720: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-706 /api/v1/namespaces/watch-706/configmaps/e2e-watch-test-configmap-b 9edd3166-de92-4d40-aa68-9e660584c957 17256013 0 2020-07-01 00:48:23 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] [{e2e.test Update v1 2020-07-01 00:48:23 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} Jul 1 00:48:33.721: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-706 /api/v1/namespaces/watch-706/configmaps/e2e-watch-test-configmap-b 9edd3166-de92-4d40-aa68-9e660584c957 17256013 0 2020-07-01 00:48:23 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] [{e2e.test Update v1 2020-07-01 00:48:23 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 1 00:48:43.721: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-706" for this suite. 
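[Annotation: each duplicated "Got : ADDED/MODIFIED/DELETED" pair above is one event delivered to two overlapping watchers (the label-A watch and the A-or-B watch). A minimal single-watcher sketch with client-go, using the namespace-agnostic form of the selector seen in the log:]

package main

import (
    "context"
    "fmt"
    "os"

    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/clientcmd"
)

func main() {
    cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
    if err != nil {
        panic(err)
    }
    cs, err := kubernetes.NewForConfig(cfg)
    if err != nil {
        panic(err)
    }

    w, err := cs.CoreV1().ConfigMaps("default").Watch(context.TODO(), metav1.ListOptions{
        LabelSelector: "watch-this-configmap=multiple-watchers-A",
    })
    if err != nil {
        panic(err)
    }
    defer w.Stop()

    // Every create, data mutation, or delete of a matching configmap arrives
    // as one event on this channel (the loop runs until the watch closes).
    for ev := range w.ResultChan() {
        fmt.Println("Got :", ev.Type)
    }
}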
• [SLOW TEST:60.198 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance]","total":294,"completed":217,"skipped":3604,"failed":1,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]"]} SSSS ------------------------------ [sig-auth] ServiceAccounts should run through the lifecycle of a ServiceAccount [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 1 00:48:43.728: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should run through the lifecycle of a ServiceAccount [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a ServiceAccount STEP: watching for the ServiceAccount to be added STEP: patching the ServiceAccount STEP: finding ServiceAccount in list of all ServiceAccounts (by LabelSelector) STEP: deleting the ServiceAccount [AfterEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 1 00:48:43.899: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svcaccounts-7746" for this suite. 
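[Annotation: the lifecycle this test walks (create, patch, find by label selector, delete) maps one-to-one onto the core/v1 ServiceAccounts client. A compressed sketch of the non-watch steps; the account name and label are hypothetical:]

package main

import (
    "context"
    "os"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/apimachinery/pkg/types"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/clientcmd"
)

func must(err error) {
    if err != nil {
        panic(err)
    }
}

func main() {
    cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
    must(err)
    cs, err := kubernetes.NewForConfig(cfg)
    must(err)
    sas := cs.CoreV1().ServiceAccounts("default")
    ctx := context.TODO()

    _, err = sas.Create(ctx, &corev1.ServiceAccount{
        ObjectMeta: metav1.ObjectMeta{Name: "demo-sa"},
    }, metav1.CreateOptions{})
    must(err)

    // Patch a label onto it so it can be found with a selector.
    patch := []byte(`{"metadata":{"labels":{"purpose":"demo"}}}`)
    _, err = sas.Patch(ctx, "demo-sa", types.StrategicMergePatchType, patch, metav1.PatchOptions{})
    must(err)

    list, err := sas.List(ctx, metav1.ListOptions{LabelSelector: "purpose=demo"})
    must(err)
    _ = list // expect exactly one item

    must(sas.Delete(ctx, "demo-sa", metav1.DeleteOptions{}))
}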
•{"msg":"PASSED [sig-auth] ServiceAccounts should run through the lifecycle of a ServiceAccount [Conformance]","total":294,"completed":218,"skipped":3608,"failed":1,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 1 00:48:43.949: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin Jul 1 00:48:44.024: INFO: Waiting up to 5m0s for pod "downwardapi-volume-e8c9d133-cdd9-4c79-ad18-ec025fe28a41" in namespace "projected-1432" to be "Succeeded or Failed" Jul 1 00:48:44.099: INFO: Pod "downwardapi-volume-e8c9d133-cdd9-4c79-ad18-ec025fe28a41": Phase="Pending", Reason="", readiness=false. Elapsed: 74.861243ms Jul 1 00:48:46.103: INFO: Pod "downwardapi-volume-e8c9d133-cdd9-4c79-ad18-ec025fe28a41": Phase="Pending", Reason="", readiness=false. Elapsed: 2.078596266s Jul 1 00:48:48.107: INFO: Pod "downwardapi-volume-e8c9d133-cdd9-4c79-ad18-ec025fe28a41": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.082712564s STEP: Saw pod success Jul 1 00:48:48.107: INFO: Pod "downwardapi-volume-e8c9d133-cdd9-4c79-ad18-ec025fe28a41" satisfied condition "Succeeded or Failed" Jul 1 00:48:48.110: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-e8c9d133-cdd9-4c79-ad18-ec025fe28a41 container client-container: STEP: delete the pod Jul 1 00:48:48.146: INFO: Waiting for pod downwardapi-volume-e8c9d133-cdd9-4c79-ad18-ec025fe28a41 to disappear Jul 1 00:48:48.212: INFO: Pod downwardapi-volume-e8c9d133-cdd9-4c79-ad18-ec025fe28a41 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 1 00:48:48.212: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1432" for this suite. 
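[Annotation: the client-container above reads its own memory limit from a file projected by the downward API. A sketch of the volume wiring under assumed names and image; a plain downwardAPI volume would behave the same way here, the projected form just allows mixing sources:]

package main

import (
    "context"
    "os"

    corev1 "k8s.io/api/core/v1"
    "k8s.io/apimachinery/pkg/api/resource"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/clientcmd"
)

func main() {
    cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
    if err != nil {
        panic(err)
    }
    cs, err := kubernetes.NewForConfig(cfg)
    if err != nil {
        panic(err)
    }

    pod := &corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "downwardapi-volume-demo"},
        Spec: corev1.PodSpec{
            RestartPolicy: corev1.RestartPolicyNever,
            Volumes: []corev1.Volume{{
                Name: "podinfo",
                VolumeSource: corev1.VolumeSource{
                    Projected: &corev1.ProjectedVolumeSource{
                        Sources: []corev1.VolumeProjection{{
                            DownwardAPI: &corev1.DownwardAPIProjection{
                                Items: []corev1.DownwardAPIVolumeFile{{
                                    Path: "memory_limit",
                                    ResourceFieldRef: &corev1.ResourceFieldSelector{
                                        ContainerName: "client-container",
                                        Resource:      "limits.memory",
                                    },
                                }},
                            },
                        }},
                    },
                },
            }},
            Containers: []corev1.Container{{
                Name:    "client-container",
                Image:   "busybox",
                Command: []string{"sh", "-c", "cat /etc/podinfo/memory_limit"},
                Resources: corev1.ResourceRequirements{
                    Limits: corev1.ResourceList{corev1.ResourceMemory: resource.MustParse("64Mi")},
                },
                VolumeMounts: []corev1.VolumeMount{{Name: "podinfo", MountPath: "/etc/podinfo"}},
            }},
        },
    }
    if _, err := cs.CoreV1().Pods("default").Create(context.TODO(), pod, metav1.CreateOptions{}); err != nil {
        panic(err)
    }
    // The projected file contains the limit in bytes (67108864 for 64Mi).
}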
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance]","total":294,"completed":219,"skipped":3656,"failed":1,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 1 00:48:48.221: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set Jul 1 00:48:52.360: INFO: Expected: &{OK} to match Container's Termination Message: OK -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 1 00:48:52.422: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-5236" for this suite. 
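[Annotation: with TerminationMessagePolicy FallbackToLogsOnError the kubelet still prefers the termination-message file; the container log tail is used only when the container fails without writing one. Since the pod above succeeds and writes "OK", the message comes from the file. A sketch of an equivalent pod (name and image hypothetical):]

package main

import (
    "context"
    "os"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/clientcmd"
)

func main() {
    cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
    if err != nil {
        panic(err)
    }
    cs, err := kubernetes.NewForConfig(cfg)
    if err != nil {
        panic(err)
    }

    pod := &corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "termination-message-demo"},
        Spec: corev1.PodSpec{
            RestartPolicy: corev1.RestartPolicyNever,
            Containers: []corev1.Container{{
                Name:  "main",
                Image: "busybox",
                // Succeed and write the message file; with this policy the log
                // tail would be used only on failure without a written file.
                Command:                  []string{"sh", "-c", "echo -n OK > /dev/termination-log"},
                TerminationMessagePath:   "/dev/termination-log",
                TerminationMessagePolicy: corev1.TerminationMessageFallbackToLogsOnError,
            }},
        },
    }
    if _, err := cs.CoreV1().Pods("default").Create(context.TODO(), pod, metav1.CreateOptions{}); err != nil {
        panic(err)
    }
    // After the pod succeeds, the message surfaces at
    // pod.Status.ContainerStatuses[0].State.Terminated.Message == "OK".
}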
•{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":294,"completed":220,"skipped":3678,"failed":1,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 1 00:48:52.432: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:134 [It] should rollback without unnecessary restarts [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Jul 1 00:48:52.551: INFO: Create a RollingUpdate DaemonSet Jul 1 00:48:52.555: INFO: Check that daemon pods launch on every node of the cluster Jul 1 00:48:52.560: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 1 00:48:52.578: INFO: Number of nodes with available pods: 0 Jul 1 00:48:52.578: INFO: Node latest-worker is running more than one daemon pod Jul 1 00:48:53.626: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 1 00:48:53.628: INFO: Number of nodes with available pods: 0 Jul 1 00:48:53.629: INFO: Node latest-worker is running more than one daemon pod Jul 1 00:48:54.675: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 1 00:48:54.678: INFO: Number of nodes with available pods: 0 Jul 1 00:48:54.678: INFO: Node latest-worker is running more than one daemon pod Jul 1 00:48:55.583: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 1 00:48:55.588: INFO: Number of nodes with available pods: 0 Jul 1 00:48:55.588: INFO: Node latest-worker is running more than one daemon pod Jul 1 00:48:56.582: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 1 00:48:56.585: INFO: Number of nodes with available pods: 2 Jul 1 00:48:56.585: INFO: Number of running nodes: 2, number of available pods: 2 Jul 1 00:48:56.585: INFO: Update the DaemonSet to trigger a rollout Jul 1 00:48:56.614: INFO: Updating DaemonSet daemon-set Jul 1 00:49:01.648: INFO: Roll back the DaemonSet before rollout is complete Jul 1 00:49:01.655: INFO: Updating DaemonSet daemon-set Jul 1 00:49:01.655: INFO: Make sure DaemonSet rollback is complete Jul 1 00:49:01.695: INFO: Wrong image for pod: 
daemon-set-c2bvl. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Jul 1 00:49:01.695: INFO: Pod daemon-set-c2bvl is not available Jul 1 00:49:01.710: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 1 00:49:02.716: INFO: Wrong image for pod: daemon-set-c2bvl. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Jul 1 00:49:02.716: INFO: Pod daemon-set-c2bvl is not available Jul 1 00:49:02.720: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 1 00:49:03.716: INFO: Pod daemon-set-sjrst is not available Jul 1 00:49:03.720: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:100 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-5834, will wait for the garbage collector to delete the pods Jul 1 00:49:03.786: INFO: Deleting DaemonSet.extensions daemon-set took: 6.213077ms Jul 1 00:49:04.086: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.258294ms Jul 1 00:49:15.344: INFO: Number of nodes with available pods: 0 Jul 1 00:49:15.344: INFO: Number of running nodes: 0, number of available pods: 0 Jul 1 00:49:15.348: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-5834/daemonsets","resourceVersion":"17256277"},"items":null} Jul 1 00:49:15.351: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-5834/pods","resourceVersion":"17256277"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 1 00:49:15.359: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-5834" for this suite. 
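[Annotation: the rollback performed above amounts to restoring the previous pod template before the broken rollout finishes; kubectl exposes the same operation as "kubectl rollout undo daemonset/daemon-set". A break-then-restore sketch in client-go, reusing the names from the log:]

package main

import (
    "context"
    "os"

    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/clientcmd"
)

func must(err error) {
    if err != nil {
        panic(err)
    }
}

func main() {
    cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
    must(err)
    cs, err := kubernetes.NewForConfig(cfg)
    must(err)
    dsc := cs.AppsV1().DaemonSets("default")
    ctx := context.TODO()

    ds, err := dsc.Get(ctx, "daemon-set", metav1.GetOptions{})
    must(err)
    good := ds.Spec.Template.Spec.Containers[0].Image // remember the working image

    // Break the rollout with an unpullable image, as the test does.
    ds.Spec.Template.Spec.Containers[0].Image = "foo:non-existent"
    ds, err = dsc.Update(ctx, ds, metav1.UpdateOptions{})
    must(err)

    // Roll back before the rollout completes by restoring the old template.
    ds.Spec.Template.Spec.Containers[0].Image = good
    _, err = dsc.Update(ctx, ds, metav1.UpdateOptions{})
    must(err)
    // Nodes still running the good image keep their pods; only the pod that
    // was already replaced is recreated, hence "without unnecessary restarts".
}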
• [SLOW TEST:22.935 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should rollback without unnecessary restarts [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]","total":294,"completed":221,"skipped":3704,"failed":1,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]"]} S ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 1 00:49:15.367: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with projected pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating pod pod-subpath-test-projected-4mt8 STEP: Creating a pod to test atomic-volume-subpath Jul 1 00:49:15.513: INFO: Waiting up to 5m0s for pod "pod-subpath-test-projected-4mt8" in namespace "subpath-5885" to be "Succeeded or Failed" Jul 1 00:49:15.531: INFO: Pod "pod-subpath-test-projected-4mt8": Phase="Pending", Reason="", readiness=false. Elapsed: 17.915432ms Jul 1 00:49:17.595: INFO: Pod "pod-subpath-test-projected-4mt8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.082413959s Jul 1 00:49:19.600: INFO: Pod "pod-subpath-test-projected-4mt8": Phase="Running", Reason="", readiness=true. Elapsed: 4.087255765s Jul 1 00:49:21.631: INFO: Pod "pod-subpath-test-projected-4mt8": Phase="Running", Reason="", readiness=true. Elapsed: 6.118384763s Jul 1 00:49:23.635: INFO: Pod "pod-subpath-test-projected-4mt8": Phase="Running", Reason="", readiness=true. Elapsed: 8.122323148s Jul 1 00:49:25.639: INFO: Pod "pod-subpath-test-projected-4mt8": Phase="Running", Reason="", readiness=true. Elapsed: 10.126092389s Jul 1 00:49:27.662: INFO: Pod "pod-subpath-test-projected-4mt8": Phase="Running", Reason="", readiness=true. Elapsed: 12.149111092s Jul 1 00:49:29.714: INFO: Pod "pod-subpath-test-projected-4mt8": Phase="Running", Reason="", readiness=true. Elapsed: 14.201298096s Jul 1 00:49:31.734: INFO: Pod "pod-subpath-test-projected-4mt8": Phase="Running", Reason="", readiness=true. Elapsed: 16.220483607s Jul 1 00:49:33.737: INFO: Pod "pod-subpath-test-projected-4mt8": Phase="Running", Reason="", readiness=true. Elapsed: 18.223821963s Jul 1 00:49:35.740: INFO: Pod "pod-subpath-test-projected-4mt8": Phase="Running", Reason="", readiness=true. Elapsed: 20.227375033s Jul 1 00:49:37.744: INFO: Pod "pod-subpath-test-projected-4mt8": Phase="Running", Reason="", readiness=true. Elapsed: 22.231147995s Jul 1 00:49:39.749: INFO: Pod "pod-subpath-test-projected-4mt8": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 24.236232875s STEP: Saw pod success Jul 1 00:49:39.749: INFO: Pod "pod-subpath-test-projected-4mt8" satisfied condition "Succeeded or Failed" Jul 1 00:49:39.752: INFO: Trying to get logs from node latest-worker2 pod pod-subpath-test-projected-4mt8 container test-container-subpath-projected-4mt8: STEP: delete the pod Jul 1 00:49:39.788: INFO: Waiting for pod pod-subpath-test-projected-4mt8 to disappear Jul 1 00:49:39.806: INFO: Pod pod-subpath-test-projected-4mt8 no longer exists STEP: Deleting pod pod-subpath-test-projected-4mt8 Jul 1 00:49:39.806: INFO: Deleting pod "pod-subpath-test-projected-4mt8" in namespace "subpath-5885" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 1 00:49:39.809: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-5885" for this suite. • [SLOW TEST:24.465 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with projected pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [LinuxOnly] [Conformance]","total":294,"completed":222,"skipped":3705,"failed":1,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]"]} SSSSSSSS ------------------------------ [sig-apps] ReplicationController should release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 1 00:49:39.832: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/rc.go:52 [It] should release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Given a ReplicationController is created STEP: When the matched label of one of its pods change Jul 1 00:49:39.966: INFO: Pod name pod-release: Found 0 pods out of 1 Jul 1 00:49:45.015: INFO: Pod name pod-release: Found 1 pods out of 1 STEP: Then the pod is released [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 1 00:49:46.128: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-4785" for this suite. 
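------------------------------
For context on the ReplicationController spec above: "the pod is released" means the controller stops owning a pod once its labels no longer match the RC's selector; the pod keeps running but is orphaned from the controller, which then creates a replacement. A minimal Go sketch (assumptions: the pod name and the name=... selector label are illustrative; this is not the suite's own code) of forcing that release with a label patch:

package main

import (
	"context"
	"flag"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	kubeconfig := flag.String("kubeconfig", "/root/.kube/config", "path to kubeconfig")
	flag.Parse()

	cfg, err := clientcmd.BuildConfigFromFlags("", *kubeconfig)
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// Overwrite the label the RC selects on (e.g. name=pod-release); the
	// controller's selector stops matching, so it releases the pod instead
	// of deleting it.
	patch := []byte(`{"metadata":{"labels":{"name":"released"}}}`)
	_, err = client.CoreV1().Pods("replication-controller-4785").Patch(
		context.TODO(), "pod-release-xxxxx", // hypothetical pod name
		types.StrategicMergePatchType, patch, metav1.PatchOptions{})
	if err != nil {
		panic(err)
	}
}
------------------------------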
• [SLOW TEST:6.333 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] ReplicationController should release no longer matching pods [Conformance]","total":294,"completed":223,"skipped":3713,"failed":1,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 1 00:49:46.166: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide host IP as an env var [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward api env vars Jul 1 00:49:46.272: INFO: Waiting up to 5m0s for pod "downward-api-894449ba-dc56-4cc8-a6ff-a734ad5ca2c7" in namespace "downward-api-7890" to be "Succeeded or Failed" Jul 1 00:49:46.303: INFO: Pod "downward-api-894449ba-dc56-4cc8-a6ff-a734ad5ca2c7": Phase="Pending", Reason="", readiness=false. Elapsed: 30.281777ms Jul 1 00:49:48.340: INFO: Pod "downward-api-894449ba-dc56-4cc8-a6ff-a734ad5ca2c7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.067608722s Jul 1 00:49:50.344: INFO: Pod "downward-api-894449ba-dc56-4cc8-a6ff-a734ad5ca2c7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.07221848s STEP: Saw pod success Jul 1 00:49:50.345: INFO: Pod "downward-api-894449ba-dc56-4cc8-a6ff-a734ad5ca2c7" satisfied condition "Succeeded or Failed" Jul 1 00:49:50.347: INFO: Trying to get logs from node latest-worker pod downward-api-894449ba-dc56-4cc8-a6ff-a734ad5ca2c7 container dapi-container: STEP: delete the pod Jul 1 00:49:50.668: INFO: Waiting for pod downward-api-894449ba-dc56-4cc8-a6ff-a734ad5ca2c7 to disappear Jul 1 00:49:50.671: INFO: Pod downward-api-894449ba-dc56-4cc8-a6ff-a734ad5ca2c7 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 1 00:49:50.671: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-7890" for this suite. 
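------------------------------
The Downward API spec above passes when the container's environment really does contain the node's IP. A minimal Go sketch of the wiring it verifies (container name, image, and command are illustrative; the fieldRef path status.hostIP is the point):

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	container := corev1.Container{
		Name:    "dapi-container",
		Image:   "busybox",
		Command: []string{"sh", "-c", "echo HOST_IP=$HOST_IP"},
		Env: []corev1.EnvVar{{
			// The kubelet resolves this at pod start, so the env var holds
			// the IP of whichever node the pod landed on.
			Name: "HOST_IP",
			ValueFrom: &corev1.EnvVarSource{
				FieldRef: &corev1.ObjectFieldSelector{FieldPath: "status.hostIP"},
			},
		}},
	}
	out, _ := json.MarshalIndent(container, "", "  ")
	fmt.Println(string(out))
}
------------------------------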
•{"msg":"PASSED [sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance]","total":294,"completed":224,"skipped":3741,"failed":1,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]"]} SSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Security Context When creating a pod with readOnlyRootFilesystem should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 1 00:49:50.679: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41 [It] should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Jul 1 00:49:50.767: INFO: Waiting up to 5m0s for pod "busybox-readonly-false-c5164cde-b224-40e0-a0bf-a38e14b1d999" in namespace "security-context-test-2588" to be "Succeeded or Failed" Jul 1 00:49:50.793: INFO: Pod "busybox-readonly-false-c5164cde-b224-40e0-a0bf-a38e14b1d999": Phase="Pending", Reason="", readiness=false. Elapsed: 26.37783ms Jul 1 00:49:52.798: INFO: Pod "busybox-readonly-false-c5164cde-b224-40e0-a0bf-a38e14b1d999": Phase="Pending", Reason="", readiness=false. Elapsed: 2.031420883s Jul 1 00:49:54.802: INFO: Pod "busybox-readonly-false-c5164cde-b224-40e0-a0bf-a38e14b1d999": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.035434853s Jul 1 00:49:54.802: INFO: Pod "busybox-readonly-false-c5164cde-b224-40e0-a0bf-a38e14b1d999" satisfied condition "Succeeded or Failed" [AfterEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 1 00:49:54.802: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-2588" for this suite. 
•{"msg":"PASSED [k8s.io] Security Context When creating a pod with readOnlyRootFilesystem should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]","total":294,"completed":225,"skipped":3761,"failed":1,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Proxy version v1 should proxy through a service and a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 1 00:49:54.818: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy through a service and a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: starting an echo server on multiple ports STEP: creating replication controller proxy-service-zzs5z in namespace proxy-3795 I0701 00:49:54.952390 8 runners.go:190] Created replication controller with name: proxy-service-zzs5z, namespace: proxy-3795, replica count: 1 I0701 00:49:56.002803 8 runners.go:190] proxy-service-zzs5z Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0701 00:49:57.002995 8 runners.go:190] proxy-service-zzs5z Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0701 00:49:58.003938 8 runners.go:190] proxy-service-zzs5z Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0701 00:49:59.005220 8 runners.go:190] proxy-service-zzs5z Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0701 00:50:00.005532 8 runners.go:190] proxy-service-zzs5z Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0701 00:50:01.005857 8 runners.go:190] proxy-service-zzs5z Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0701 00:50:02.006110 8 runners.go:190] proxy-service-zzs5z Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0701 00:50:03.006371 8 runners.go:190] proxy-service-zzs5z Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0701 00:50:04.006690 8 runners.go:190] proxy-service-zzs5z Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0701 00:50:05.006932 8 runners.go:190] proxy-service-zzs5z Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0701 00:50:06.007233 8 runners.go:190] proxy-service-zzs5z Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0701 00:50:07.007439 8 runners.go:190] proxy-service-zzs5z Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 
unknown, 1 runningButNotReady I0701 00:50:08.007654 8 runners.go:190] proxy-service-zzs5z Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0701 00:50:09.007942 8 runners.go:190] proxy-service-zzs5z Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Jul 1 00:50:09.011: INFO: setup took 14.105959107s, starting test cases STEP: running 16 cases, 20 attempts per case, 320 total attempts Jul 1 00:50:09.017: INFO: (0) /api/v1/namespaces/proxy-3795/pods/proxy-service-zzs5z-q2kgj/proxy/: test (200; 6.153928ms) Jul 1 00:50:09.020: INFO: (0) /api/v1/namespaces/proxy-3795/pods/http:proxy-service-zzs5z-q2kgj:1080/proxy/: ... (200; 8.302829ms) Jul 1 00:50:09.020: INFO: (0) /api/v1/namespaces/proxy-3795/pods/http:proxy-service-zzs5z-q2kgj:160/proxy/: foo (200; 8.59564ms) Jul 1 00:50:09.020: INFO: (0) /api/v1/namespaces/proxy-3795/services/http:proxy-service-zzs5z:portname2/proxy/: bar (200; 8.459801ms) Jul 1 00:50:09.020: INFO: (0) /api/v1/namespaces/proxy-3795/services/proxy-service-zzs5z:portname1/proxy/: foo (200; 8.511213ms) Jul 1 00:50:09.026: INFO: (0) /api/v1/namespaces/proxy-3795/pods/proxy-service-zzs5z-q2kgj:162/proxy/: bar (200; 14.853804ms) Jul 1 00:50:09.039: INFO: (0) /api/v1/namespaces/proxy-3795/pods/proxy-service-zzs5z-q2kgj:160/proxy/: foo (200; 27.490147ms) Jul 1 00:50:09.039: INFO: (0) /api/v1/namespaces/proxy-3795/services/http:proxy-service-zzs5z:portname1/proxy/: foo (200; 27.768661ms) Jul 1 00:50:09.040: INFO: (0) /api/v1/namespaces/proxy-3795/services/proxy-service-zzs5z:portname2/proxy/: bar (200; 28.2041ms) Jul 1 00:50:09.040: INFO: (0) /api/v1/namespaces/proxy-3795/pods/http:proxy-service-zzs5z-q2kgj:162/proxy/: bar (200; 28.315204ms) Jul 1 00:50:09.057: INFO: (0) /api/v1/namespaces/proxy-3795/pods/proxy-service-zzs5z-q2kgj:1080/proxy/: test<... (200; 45.983369ms) Jul 1 00:50:09.059: INFO: (0) /api/v1/namespaces/proxy-3795/pods/https:proxy-service-zzs5z-q2kgj:462/proxy/: tls qux (200; 47.159455ms) Jul 1 00:50:09.059: INFO: (0) /api/v1/namespaces/proxy-3795/services/https:proxy-service-zzs5z:tlsportname1/proxy/: tls baz (200; 47.535045ms) Jul 1 00:50:09.059: INFO: (0) /api/v1/namespaces/proxy-3795/pods/https:proxy-service-zzs5z-q2kgj:460/proxy/: tls baz (200; 47.659196ms) Jul 1 00:50:09.059: INFO: (0) /api/v1/namespaces/proxy-3795/services/https:proxy-service-zzs5z:tlsportname2/proxy/: tls qux (200; 48.215967ms) Jul 1 00:50:09.065: INFO: (0) /api/v1/namespaces/proxy-3795/pods/https:proxy-service-zzs5z-q2kgj:443/proxy/: test<... (200; 4.134812ms) Jul 1 00:50:09.069: INFO: (1) /api/v1/namespaces/proxy-3795/pods/https:proxy-service-zzs5z-q2kgj:443/proxy/: ... 
(200; 4.622037ms) Jul 1 00:50:09.070: INFO: (1) /api/v1/namespaces/proxy-3795/pods/http:proxy-service-zzs5z-q2kgj:162/proxy/: bar (200; 4.806159ms) Jul 1 00:50:09.070: INFO: (1) /api/v1/namespaces/proxy-3795/pods/proxy-service-zzs5z-q2kgj:162/proxy/: bar (200; 4.727575ms) Jul 1 00:50:09.070: INFO: (1) /api/v1/namespaces/proxy-3795/pods/proxy-service-zzs5z-q2kgj/proxy/: test (200; 4.838523ms) Jul 1 00:50:09.070: INFO: (1) /api/v1/namespaces/proxy-3795/services/proxy-service-zzs5z:portname2/proxy/: bar (200; 5.389335ms) Jul 1 00:50:09.070: INFO: (1) /api/v1/namespaces/proxy-3795/services/http:proxy-service-zzs5z:portname2/proxy/: bar (200; 5.369626ms) Jul 1 00:50:09.071: INFO: (1) /api/v1/namespaces/proxy-3795/services/https:proxy-service-zzs5z:tlsportname1/proxy/: tls baz (200; 6.021238ms) Jul 1 00:50:09.071: INFO: (1) /api/v1/namespaces/proxy-3795/services/http:proxy-service-zzs5z:portname1/proxy/: foo (200; 6.148182ms) Jul 1 00:50:09.071: INFO: (1) /api/v1/namespaces/proxy-3795/services/proxy-service-zzs5z:portname1/proxy/: foo (200; 6.2228ms) Jul 1 00:50:09.071: INFO: (1) /api/v1/namespaces/proxy-3795/services/https:proxy-service-zzs5z:tlsportname2/proxy/: tls qux (200; 6.274136ms) Jul 1 00:50:09.078: INFO: (2) /api/v1/namespaces/proxy-3795/pods/proxy-service-zzs5z-q2kgj/proxy/: test (200; 6.740828ms) Jul 1 00:50:09.078: INFO: (2) /api/v1/namespaces/proxy-3795/pods/proxy-service-zzs5z-q2kgj:1080/proxy/: test<... (200; 7.008337ms) Jul 1 00:50:09.079: INFO: (2) /api/v1/namespaces/proxy-3795/pods/https:proxy-service-zzs5z-q2kgj:462/proxy/: tls qux (200; 7.189754ms) Jul 1 00:50:09.079: INFO: (2) /api/v1/namespaces/proxy-3795/services/https:proxy-service-zzs5z:tlsportname1/proxy/: tls baz (200; 7.347132ms) Jul 1 00:50:09.079: INFO: (2) /api/v1/namespaces/proxy-3795/pods/https:proxy-service-zzs5z-q2kgj:460/proxy/: tls baz (200; 7.328926ms) Jul 1 00:50:09.079: INFO: (2) /api/v1/namespaces/proxy-3795/services/proxy-service-zzs5z:portname1/proxy/: foo (200; 7.35908ms) Jul 1 00:50:09.079: INFO: (2) /api/v1/namespaces/proxy-3795/pods/http:proxy-service-zzs5z-q2kgj:1080/proxy/: ... (200; 7.339669ms) Jul 1 00:50:09.079: INFO: (2) /api/v1/namespaces/proxy-3795/pods/proxy-service-zzs5z-q2kgj:160/proxy/: foo (200; 7.632856ms) Jul 1 00:50:09.079: INFO: (2) /api/v1/namespaces/proxy-3795/pods/proxy-service-zzs5z-q2kgj:162/proxy/: bar (200; 7.766258ms) Jul 1 00:50:09.079: INFO: (2) /api/v1/namespaces/proxy-3795/pods/https:proxy-service-zzs5z-q2kgj:443/proxy/: ... (200; 5.494186ms) Jul 1 00:50:09.086: INFO: (3) /api/v1/namespaces/proxy-3795/pods/https:proxy-service-zzs5z-q2kgj:443/proxy/: test<... 
(200; 5.780596ms) Jul 1 00:50:09.086: INFO: (3) /api/v1/namespaces/proxy-3795/services/proxy-service-zzs5z:portname2/proxy/: bar (200; 5.768859ms) Jul 1 00:50:09.086: INFO: (3) /api/v1/namespaces/proxy-3795/pods/proxy-service-zzs5z-q2kgj:162/proxy/: bar (200; 5.874787ms) Jul 1 00:50:09.087: INFO: (3) /api/v1/namespaces/proxy-3795/pods/https:proxy-service-zzs5z-q2kgj:460/proxy/: tls baz (200; 6.005804ms) Jul 1 00:50:09.087: INFO: (3) /api/v1/namespaces/proxy-3795/services/http:proxy-service-zzs5z:portname2/proxy/: bar (200; 5.994476ms) Jul 1 00:50:09.087: INFO: (3) /api/v1/namespaces/proxy-3795/pods/https:proxy-service-zzs5z-q2kgj:462/proxy/: tls qux (200; 6.062611ms) Jul 1 00:50:09.087: INFO: (3) /api/v1/namespaces/proxy-3795/pods/proxy-service-zzs5z-q2kgj:160/proxy/: foo (200; 6.087394ms) Jul 1 00:50:09.087: INFO: (3) /api/v1/namespaces/proxy-3795/pods/proxy-service-zzs5z-q2kgj/proxy/: test (200; 6.272541ms) Jul 1 00:50:09.090: INFO: (4) /api/v1/namespaces/proxy-3795/pods/https:proxy-service-zzs5z-q2kgj:460/proxy/: tls baz (200; 3.182584ms) Jul 1 00:50:09.090: INFO: (4) /api/v1/namespaces/proxy-3795/pods/proxy-service-zzs5z-q2kgj:160/proxy/: foo (200; 3.232276ms) Jul 1 00:50:09.091: INFO: (4) /api/v1/namespaces/proxy-3795/pods/proxy-service-zzs5z-q2kgj/proxy/: test (200; 4.350488ms) Jul 1 00:50:09.091: INFO: (4) /api/v1/namespaces/proxy-3795/pods/proxy-service-zzs5z-q2kgj:162/proxy/: bar (200; 4.414291ms) Jul 1 00:50:09.091: INFO: (4) /api/v1/namespaces/proxy-3795/pods/http:proxy-service-zzs5z-q2kgj:160/proxy/: foo (200; 4.469686ms) Jul 1 00:50:09.092: INFO: (4) /api/v1/namespaces/proxy-3795/services/proxy-service-zzs5z:portname2/proxy/: bar (200; 4.853562ms) Jul 1 00:50:09.092: INFO: (4) /api/v1/namespaces/proxy-3795/pods/https:proxy-service-zzs5z-q2kgj:443/proxy/: test<... (200; 5.077788ms) Jul 1 00:50:09.092: INFO: (4) /api/v1/namespaces/proxy-3795/services/http:proxy-service-zzs5z:portname1/proxy/: foo (200; 5.200338ms) Jul 1 00:50:09.092: INFO: (4) /api/v1/namespaces/proxy-3795/pods/http:proxy-service-zzs5z-q2kgj:1080/proxy/: ... (200; 5.155542ms) Jul 1 00:50:09.092: INFO: (4) /api/v1/namespaces/proxy-3795/pods/https:proxy-service-zzs5z-q2kgj:462/proxy/: tls qux (200; 5.257045ms) Jul 1 00:50:09.092: INFO: (4) /api/v1/namespaces/proxy-3795/services/https:proxy-service-zzs5z:tlsportname2/proxy/: tls qux (200; 5.455302ms) Jul 1 00:50:09.092: INFO: (4) /api/v1/namespaces/proxy-3795/services/http:proxy-service-zzs5z:portname2/proxy/: bar (200; 5.580491ms) Jul 1 00:50:09.097: INFO: (5) /api/v1/namespaces/proxy-3795/pods/https:proxy-service-zzs5z-q2kgj:443/proxy/: test<... (200; 5.25454ms) Jul 1 00:50:09.098: INFO: (5) /api/v1/namespaces/proxy-3795/pods/http:proxy-service-zzs5z-q2kgj:1080/proxy/: ... 
(200; 5.284016ms) Jul 1 00:50:09.098: INFO: (5) /api/v1/namespaces/proxy-3795/pods/http:proxy-service-zzs5z-q2kgj:162/proxy/: bar (200; 5.405418ms) Jul 1 00:50:09.098: INFO: (5) /api/v1/namespaces/proxy-3795/pods/proxy-service-zzs5z-q2kgj/proxy/: test (200; 5.558212ms) Jul 1 00:50:09.098: INFO: (5) /api/v1/namespaces/proxy-3795/pods/proxy-service-zzs5z-q2kgj:160/proxy/: foo (200; 5.576914ms) Jul 1 00:50:09.098: INFO: (5) /api/v1/namespaces/proxy-3795/pods/https:proxy-service-zzs5z-q2kgj:460/proxy/: tls baz (200; 5.858878ms) Jul 1 00:50:09.098: INFO: (5) /api/v1/namespaces/proxy-3795/pods/http:proxy-service-zzs5z-q2kgj:160/proxy/: foo (200; 5.925052ms) Jul 1 00:50:09.099: INFO: (5) /api/v1/namespaces/proxy-3795/services/proxy-service-zzs5z:portname1/proxy/: foo (200; 5.786418ms) Jul 1 00:50:09.099: INFO: (5) /api/v1/namespaces/proxy-3795/services/proxy-service-zzs5z:portname2/proxy/: bar (200; 6.052081ms) Jul 1 00:50:09.099: INFO: (5) /api/v1/namespaces/proxy-3795/pods/proxy-service-zzs5z-q2kgj:162/proxy/: bar (200; 6.036164ms) Jul 1 00:50:09.099: INFO: (5) /api/v1/namespaces/proxy-3795/services/http:proxy-service-zzs5z:portname1/proxy/: foo (200; 6.106365ms) Jul 1 00:50:09.099: INFO: (5) /api/v1/namespaces/proxy-3795/services/http:proxy-service-zzs5z:portname2/proxy/: bar (200; 6.236011ms) Jul 1 00:50:09.105: INFO: (6) /api/v1/namespaces/proxy-3795/services/http:proxy-service-zzs5z:portname2/proxy/: bar (200; 6.488379ms) Jul 1 00:50:09.105: INFO: (6) /api/v1/namespaces/proxy-3795/services/proxy-service-zzs5z:portname2/proxy/: bar (200; 6.455952ms) Jul 1 00:50:09.105: INFO: (6) /api/v1/namespaces/proxy-3795/services/https:proxy-service-zzs5z:tlsportname1/proxy/: tls baz (200; 6.497301ms) Jul 1 00:50:09.105: INFO: (6) /api/v1/namespaces/proxy-3795/services/proxy-service-zzs5z:portname1/proxy/: foo (200; 6.467477ms) Jul 1 00:50:09.106: INFO: (6) /api/v1/namespaces/proxy-3795/pods/https:proxy-service-zzs5z-q2kgj:462/proxy/: tls qux (200; 7.08537ms) Jul 1 00:50:09.107: INFO: (6) /api/v1/namespaces/proxy-3795/pods/https:proxy-service-zzs5z-q2kgj:443/proxy/: test (200; 7.634562ms) Jul 1 00:50:09.107: INFO: (6) /api/v1/namespaces/proxy-3795/pods/proxy-service-zzs5z-q2kgj:160/proxy/: foo (200; 7.733878ms) Jul 1 00:50:09.107: INFO: (6) /api/v1/namespaces/proxy-3795/pods/http:proxy-service-zzs5z-q2kgj:1080/proxy/: ... (200; 8.16208ms) Jul 1 00:50:09.107: INFO: (6) /api/v1/namespaces/proxy-3795/pods/proxy-service-zzs5z-q2kgj:162/proxy/: bar (200; 8.140086ms) Jul 1 00:50:09.107: INFO: (6) /api/v1/namespaces/proxy-3795/pods/proxy-service-zzs5z-q2kgj:1080/proxy/: test<... 
(200; 8.223576ms) Jul 1 00:50:09.107: INFO: (6) /api/v1/namespaces/proxy-3795/services/https:proxy-service-zzs5z:tlsportname2/proxy/: tls qux (200; 8.313914ms) Jul 1 00:50:09.107: INFO: (6) /api/v1/namespaces/proxy-3795/pods/http:proxy-service-zzs5z-q2kgj:160/proxy/: foo (200; 8.297903ms) Jul 1 00:50:09.107: INFO: (6) /api/v1/namespaces/proxy-3795/pods/https:proxy-service-zzs5z-q2kgj:460/proxy/: tls baz (200; 8.289617ms) Jul 1 00:50:09.108: INFO: (6) /api/v1/namespaces/proxy-3795/pods/http:proxy-service-zzs5z-q2kgj:162/proxy/: bar (200; 8.648087ms) Jul 1 00:50:09.111: INFO: (7) /api/v1/namespaces/proxy-3795/pods/http:proxy-service-zzs5z-q2kgj:160/proxy/: foo (200; 2.893301ms) Jul 1 00:50:09.111: INFO: (7) /api/v1/namespaces/proxy-3795/pods/proxy-service-zzs5z-q2kgj:160/proxy/: foo (200; 3.336741ms) Jul 1 00:50:09.111: INFO: (7) /api/v1/namespaces/proxy-3795/pods/https:proxy-service-zzs5z-q2kgj:443/proxy/: test (200; 4.349916ms) Jul 1 00:50:09.112: INFO: (7) /api/v1/namespaces/proxy-3795/pods/https:proxy-service-zzs5z-q2kgj:460/proxy/: tls baz (200; 4.489474ms) Jul 1 00:50:09.112: INFO: (7) /api/v1/namespaces/proxy-3795/pods/http:proxy-service-zzs5z-q2kgj:1080/proxy/: ... (200; 4.460675ms) Jul 1 00:50:09.112: INFO: (7) /api/v1/namespaces/proxy-3795/services/https:proxy-service-zzs5z:tlsportname2/proxy/: tls qux (200; 4.588947ms) Jul 1 00:50:09.112: INFO: (7) /api/v1/namespaces/proxy-3795/services/http:proxy-service-zzs5z:portname2/proxy/: bar (200; 4.525425ms) Jul 1 00:50:09.114: INFO: (7) /api/v1/namespaces/proxy-3795/pods/proxy-service-zzs5z-q2kgj:1080/proxy/: test<... (200; 5.941723ms) Jul 1 00:50:09.114: INFO: (7) /api/v1/namespaces/proxy-3795/services/proxy-service-zzs5z:portname1/proxy/: foo (200; 5.983333ms) Jul 1 00:50:09.114: INFO: (7) /api/v1/namespaces/proxy-3795/services/proxy-service-zzs5z:portname2/proxy/: bar (200; 5.988661ms) Jul 1 00:50:09.114: INFO: (7) /api/v1/namespaces/proxy-3795/services/http:proxy-service-zzs5z:portname1/proxy/: foo (200; 5.99324ms) Jul 1 00:50:09.114: INFO: (7) /api/v1/namespaces/proxy-3795/services/https:proxy-service-zzs5z:tlsportname1/proxy/: tls baz (200; 5.974408ms) Jul 1 00:50:09.116: INFO: (8) /api/v1/namespaces/proxy-3795/pods/http:proxy-service-zzs5z-q2kgj:162/proxy/: bar (200; 1.812965ms) Jul 1 00:50:09.117: INFO: (8) /api/v1/namespaces/proxy-3795/pods/https:proxy-service-zzs5z-q2kgj:443/proxy/: ... (200; 3.331819ms) Jul 1 00:50:09.118: INFO: (8) /api/v1/namespaces/proxy-3795/pods/http:proxy-service-zzs5z-q2kgj:160/proxy/: foo (200; 3.859613ms) Jul 1 00:50:09.118: INFO: (8) /api/v1/namespaces/proxy-3795/services/http:proxy-service-zzs5z:portname2/proxy/: bar (200; 4.475475ms) Jul 1 00:50:09.118: INFO: (8) /api/v1/namespaces/proxy-3795/pods/proxy-service-zzs5z-q2kgj:162/proxy/: bar (200; 4.512934ms) Jul 1 00:50:09.119: INFO: (8) /api/v1/namespaces/proxy-3795/services/proxy-service-zzs5z:portname1/proxy/: foo (200; 4.63035ms) Jul 1 00:50:09.119: INFO: (8) /api/v1/namespaces/proxy-3795/services/http:proxy-service-zzs5z:portname1/proxy/: foo (200; 4.560063ms) Jul 1 00:50:09.119: INFO: (8) /api/v1/namespaces/proxy-3795/pods/proxy-service-zzs5z-q2kgj:1080/proxy/: test<... 
(200; 4.832663ms) Jul 1 00:50:09.119: INFO: (8) /api/v1/namespaces/proxy-3795/services/https:proxy-service-zzs5z:tlsportname1/proxy/: tls baz (200; 4.796498ms) Jul 1 00:50:09.119: INFO: (8) /api/v1/namespaces/proxy-3795/services/proxy-service-zzs5z:portname2/proxy/: bar (200; 4.780417ms) Jul 1 00:50:09.119: INFO: (8) /api/v1/namespaces/proxy-3795/services/https:proxy-service-zzs5z:tlsportname2/proxy/: tls qux (200; 4.799198ms) Jul 1 00:50:09.119: INFO: (8) /api/v1/namespaces/proxy-3795/pods/https:proxy-service-zzs5z-q2kgj:460/proxy/: tls baz (200; 4.793829ms) Jul 1 00:50:09.119: INFO: (8) /api/v1/namespaces/proxy-3795/pods/https:proxy-service-zzs5z-q2kgj:462/proxy/: tls qux (200; 4.925942ms) Jul 1 00:50:09.119: INFO: (8) /api/v1/namespaces/proxy-3795/pods/proxy-service-zzs5z-q2kgj/proxy/: test (200; 4.929711ms) Jul 1 00:50:09.120: INFO: (8) /api/v1/namespaces/proxy-3795/pods/proxy-service-zzs5z-q2kgj:160/proxy/: foo (200; 5.913019ms) Jul 1 00:50:09.123: INFO: (9) /api/v1/namespaces/proxy-3795/pods/proxy-service-zzs5z-q2kgj:160/proxy/: foo (200; 3.442262ms) Jul 1 00:50:09.123: INFO: (9) /api/v1/namespaces/proxy-3795/pods/proxy-service-zzs5z-q2kgj/proxy/: test (200; 3.478445ms) Jul 1 00:50:09.124: INFO: (9) /api/v1/namespaces/proxy-3795/pods/https:proxy-service-zzs5z-q2kgj:443/proxy/: test<... (200; 3.840815ms) Jul 1 00:50:09.126: INFO: (9) /api/v1/namespaces/proxy-3795/services/http:proxy-service-zzs5z:portname2/proxy/: bar (200; 6.195474ms) Jul 1 00:50:09.126: INFO: (9) /api/v1/namespaces/proxy-3795/services/proxy-service-zzs5z:portname2/proxy/: bar (200; 6.230958ms) Jul 1 00:50:09.126: INFO: (9) /api/v1/namespaces/proxy-3795/pods/https:proxy-service-zzs5z-q2kgj:462/proxy/: tls qux (200; 6.305477ms) Jul 1 00:50:09.126: INFO: (9) /api/v1/namespaces/proxy-3795/pods/http:proxy-service-zzs5z-q2kgj:1080/proxy/: ... (200; 6.19562ms) Jul 1 00:50:09.126: INFO: (9) /api/v1/namespaces/proxy-3795/pods/https:proxy-service-zzs5z-q2kgj:460/proxy/: tls baz (200; 6.218326ms) Jul 1 00:50:09.126: INFO: (9) /api/v1/namespaces/proxy-3795/services/proxy-service-zzs5z:portname1/proxy/: foo (200; 6.270361ms) Jul 1 00:50:09.126: INFO: (9) /api/v1/namespaces/proxy-3795/services/http:proxy-service-zzs5z:portname1/proxy/: foo (200; 6.273488ms) Jul 1 00:50:09.126: INFO: (9) /api/v1/namespaces/proxy-3795/pods/proxy-service-zzs5z-q2kgj:162/proxy/: bar (200; 6.521708ms) Jul 1 00:50:09.127: INFO: (9) /api/v1/namespaces/proxy-3795/services/https:proxy-service-zzs5z:tlsportname1/proxy/: tls baz (200; 6.722234ms) Jul 1 00:50:09.127: INFO: (9) /api/v1/namespaces/proxy-3795/pods/http:proxy-service-zzs5z-q2kgj:162/proxy/: bar (200; 7.268928ms) Jul 1 00:50:09.127: INFO: (9) /api/v1/namespaces/proxy-3795/services/https:proxy-service-zzs5z:tlsportname2/proxy/: tls qux (200; 7.582192ms) Jul 1 00:50:09.132: INFO: (10) /api/v1/namespaces/proxy-3795/pods/proxy-service-zzs5z-q2kgj:1080/proxy/: test<... (200; 4.411778ms) Jul 1 00:50:09.132: INFO: (10) /api/v1/namespaces/proxy-3795/pods/http:proxy-service-zzs5z-q2kgj:160/proxy/: foo (200; 4.722352ms) Jul 1 00:50:09.132: INFO: (10) /api/v1/namespaces/proxy-3795/pods/http:proxy-service-zzs5z-q2kgj:162/proxy/: bar (200; 4.718809ms) Jul 1 00:50:09.133: INFO: (10) /api/v1/namespaces/proxy-3795/pods/https:proxy-service-zzs5z-q2kgj:443/proxy/: ... 
(200; 6.023095ms) Jul 1 00:50:09.134: INFO: (10) /api/v1/namespaces/proxy-3795/services/proxy-service-zzs5z:portname2/proxy/: bar (200; 6.01175ms) Jul 1 00:50:09.134: INFO: (10) /api/v1/namespaces/proxy-3795/services/proxy-service-zzs5z:portname1/proxy/: foo (200; 6.195149ms) Jul 1 00:50:09.134: INFO: (10) /api/v1/namespaces/proxy-3795/services/https:proxy-service-zzs5z:tlsportname2/proxy/: tls qux (200; 6.79776ms) Jul 1 00:50:09.134: INFO: (10) /api/v1/namespaces/proxy-3795/pods/https:proxy-service-zzs5z-q2kgj:462/proxy/: tls qux (200; 6.890643ms) Jul 1 00:50:09.134: INFO: (10) /api/v1/namespaces/proxy-3795/pods/https:proxy-service-zzs5z-q2kgj:460/proxy/: tls baz (200; 6.830752ms) Jul 1 00:50:09.134: INFO: (10) /api/v1/namespaces/proxy-3795/services/http:proxy-service-zzs5z:portname1/proxy/: foo (200; 6.876317ms) Jul 1 00:50:09.134: INFO: (10) /api/v1/namespaces/proxy-3795/pods/proxy-service-zzs5z-q2kgj/proxy/: test (200; 6.938613ms) Jul 1 00:50:09.135: INFO: (10) /api/v1/namespaces/proxy-3795/services/https:proxy-service-zzs5z:tlsportname1/proxy/: tls baz (200; 7.116576ms) Jul 1 00:50:09.135: INFO: (10) /api/v1/namespaces/proxy-3795/pods/proxy-service-zzs5z-q2kgj:160/proxy/: foo (200; 7.107773ms) Jul 1 00:50:09.135: INFO: (10) /api/v1/namespaces/proxy-3795/services/http:proxy-service-zzs5z:portname2/proxy/: bar (200; 7.092509ms) Jul 1 00:50:09.135: INFO: (10) /api/v1/namespaces/proxy-3795/pods/proxy-service-zzs5z-q2kgj:162/proxy/: bar (200; 7.271751ms) Jul 1 00:50:09.137: INFO: (11) /api/v1/namespaces/proxy-3795/pods/https:proxy-service-zzs5z-q2kgj:460/proxy/: tls baz (200; 2.473208ms) Jul 1 00:50:09.137: INFO: (11) /api/v1/namespaces/proxy-3795/pods/proxy-service-zzs5z-q2kgj:1080/proxy/: test<... (200; 2.450431ms) Jul 1 00:50:09.138: INFO: (11) /api/v1/namespaces/proxy-3795/pods/http:proxy-service-zzs5z-q2kgj:1080/proxy/: ... 
(200; 2.403933ms) Jul 1 00:50:09.139: INFO: (11) /api/v1/namespaces/proxy-3795/services/proxy-service-zzs5z:portname2/proxy/: bar (200; 4.090173ms) Jul 1 00:50:09.140: INFO: (11) /api/v1/namespaces/proxy-3795/services/http:proxy-service-zzs5z:portname1/proxy/: foo (200; 4.476703ms) Jul 1 00:50:09.140: INFO: (11) /api/v1/namespaces/proxy-3795/services/https:proxy-service-zzs5z:tlsportname2/proxy/: tls qux (200; 4.529899ms) Jul 1 00:50:09.140: INFO: (11) /api/v1/namespaces/proxy-3795/services/proxy-service-zzs5z:portname1/proxy/: foo (200; 4.46629ms) Jul 1 00:50:09.140: INFO: (11) /api/v1/namespaces/proxy-3795/services/http:proxy-service-zzs5z:portname2/proxy/: bar (200; 4.769174ms) Jul 1 00:50:09.140: INFO: (11) /api/v1/namespaces/proxy-3795/pods/http:proxy-service-zzs5z-q2kgj:160/proxy/: foo (200; 4.922432ms) Jul 1 00:50:09.140: INFO: (11) /api/v1/namespaces/proxy-3795/pods/proxy-service-zzs5z-q2kgj:160/proxy/: foo (200; 5.273228ms) Jul 1 00:50:09.140: INFO: (11) /api/v1/namespaces/proxy-3795/pods/proxy-service-zzs5z-q2kgj/proxy/: test (200; 5.226476ms) Jul 1 00:50:09.140: INFO: (11) /api/v1/namespaces/proxy-3795/pods/https:proxy-service-zzs5z-q2kgj:462/proxy/: tls qux (200; 5.325205ms) Jul 1 00:50:09.140: INFO: (11) /api/v1/namespaces/proxy-3795/pods/http:proxy-service-zzs5z-q2kgj:162/proxy/: bar (200; 5.305019ms) Jul 1 00:50:09.140: INFO: (11) /api/v1/namespaces/proxy-3795/pods/proxy-service-zzs5z-q2kgj:162/proxy/: bar (200; 5.40223ms) Jul 1 00:50:09.140: INFO: (11) /api/v1/namespaces/proxy-3795/pods/https:proxy-service-zzs5z-q2kgj:443/proxy/: test (200; 4.754401ms) Jul 1 00:50:09.145: INFO: (12) /api/v1/namespaces/proxy-3795/pods/proxy-service-zzs5z-q2kgj:162/proxy/: bar (200; 4.887445ms) Jul 1 00:50:09.145: INFO: (12) /api/v1/namespaces/proxy-3795/pods/https:proxy-service-zzs5z-q2kgj:460/proxy/: tls baz (200; 4.9371ms) Jul 1 00:50:09.146: INFO: (12) /api/v1/namespaces/proxy-3795/pods/http:proxy-service-zzs5z-q2kgj:1080/proxy/: ... (200; 5.930008ms) Jul 1 00:50:09.148: INFO: (12) /api/v1/namespaces/proxy-3795/services/proxy-service-zzs5z:portname2/proxy/: bar (200; 7.462884ms) Jul 1 00:50:09.148: INFO: (12) /api/v1/namespaces/proxy-3795/pods/proxy-service-zzs5z-q2kgj:160/proxy/: foo (200; 7.467368ms) Jul 1 00:50:09.148: INFO: (12) /api/v1/namespaces/proxy-3795/services/proxy-service-zzs5z:portname1/proxy/: foo (200; 7.282317ms) Jul 1 00:50:09.148: INFO: (12) /api/v1/namespaces/proxy-3795/services/http:proxy-service-zzs5z:portname2/proxy/: bar (200; 7.550169ms) Jul 1 00:50:09.148: INFO: (12) /api/v1/namespaces/proxy-3795/services/https:proxy-service-zzs5z:tlsportname1/proxy/: tls baz (200; 7.589131ms) Jul 1 00:50:09.148: INFO: (12) /api/v1/namespaces/proxy-3795/pods/https:proxy-service-zzs5z-q2kgj:462/proxy/: tls qux (200; 7.735403ms) Jul 1 00:50:09.148: INFO: (12) /api/v1/namespaces/proxy-3795/pods/http:proxy-service-zzs5z-q2kgj:162/proxy/: bar (200; 7.741721ms) Jul 1 00:50:09.148: INFO: (12) /api/v1/namespaces/proxy-3795/pods/proxy-service-zzs5z-q2kgj:1080/proxy/: test<... (200; 7.827418ms) Jul 1 00:50:09.148: INFO: (12) /api/v1/namespaces/proxy-3795/services/https:proxy-service-zzs5z:tlsportname2/proxy/: tls qux (200; 7.876549ms) Jul 1 00:50:09.148: INFO: (12) /api/v1/namespaces/proxy-3795/pods/http:proxy-service-zzs5z-q2kgj:160/proxy/: foo (200; 7.896131ms) Jul 1 00:50:09.149: INFO: (12) /api/v1/namespaces/proxy-3795/pods/https:proxy-service-zzs5z-q2kgj:443/proxy/: test<... 
(200; 4.695298ms) Jul 1 00:50:09.154: INFO: (13) /api/v1/namespaces/proxy-3795/pods/http:proxy-service-zzs5z-q2kgj:160/proxy/: foo (200; 5.128136ms) Jul 1 00:50:09.154: INFO: (13) /api/v1/namespaces/proxy-3795/pods/https:proxy-service-zzs5z-q2kgj:460/proxy/: tls baz (200; 5.107554ms) Jul 1 00:50:09.154: INFO: (13) /api/v1/namespaces/proxy-3795/pods/https:proxy-service-zzs5z-q2kgj:443/proxy/: test (200; 5.086873ms) Jul 1 00:50:09.176: INFO: (13) /api/v1/namespaces/proxy-3795/services/http:proxy-service-zzs5z:portname2/proxy/: bar (200; 26.76315ms) Jul 1 00:50:09.177: INFO: (13) /api/v1/namespaces/proxy-3795/pods/http:proxy-service-zzs5z-q2kgj:1080/proxy/: ... (200; 27.70319ms) Jul 1 00:50:09.178: INFO: (13) /api/v1/namespaces/proxy-3795/services/https:proxy-service-zzs5z:tlsportname1/proxy/: tls baz (200; 29.07633ms) Jul 1 00:50:09.178: INFO: (13) /api/v1/namespaces/proxy-3795/services/https:proxy-service-zzs5z:tlsportname2/proxy/: tls qux (200; 29.430441ms) Jul 1 00:50:09.179: INFO: (13) /api/v1/namespaces/proxy-3795/services/proxy-service-zzs5z:portname1/proxy/: foo (200; 29.669499ms) Jul 1 00:50:09.181: INFO: (13) /api/v1/namespaces/proxy-3795/services/http:proxy-service-zzs5z:portname1/proxy/: foo (200; 32.505669ms) Jul 1 00:50:09.181: INFO: (13) /api/v1/namespaces/proxy-3795/services/proxy-service-zzs5z:portname2/proxy/: bar (200; 32.547896ms) Jul 1 00:50:09.190: INFO: (14) /api/v1/namespaces/proxy-3795/pods/proxy-service-zzs5z-q2kgj:162/proxy/: bar (200; 8.159349ms) Jul 1 00:50:09.190: INFO: (14) /api/v1/namespaces/proxy-3795/pods/http:proxy-service-zzs5z-q2kgj:162/proxy/: bar (200; 8.209971ms) Jul 1 00:50:09.190: INFO: (14) /api/v1/namespaces/proxy-3795/pods/http:proxy-service-zzs5z-q2kgj:1080/proxy/: ... (200; 8.268172ms) Jul 1 00:50:09.190: INFO: (14) /api/v1/namespaces/proxy-3795/pods/proxy-service-zzs5z-q2kgj:160/proxy/: foo (200; 8.176982ms) Jul 1 00:50:09.190: INFO: (14) /api/v1/namespaces/proxy-3795/pods/proxy-service-zzs5z-q2kgj/proxy/: test (200; 8.284764ms) Jul 1 00:50:09.190: INFO: (14) /api/v1/namespaces/proxy-3795/pods/proxy-service-zzs5z-q2kgj:1080/proxy/: test<... (200; 8.271042ms) Jul 1 00:50:09.190: INFO: (14) /api/v1/namespaces/proxy-3795/pods/http:proxy-service-zzs5z-q2kgj:160/proxy/: foo (200; 8.195072ms) Jul 1 00:50:09.190: INFO: (14) /api/v1/namespaces/proxy-3795/pods/https:proxy-service-zzs5z-q2kgj:443/proxy/: test (200; 30.158072ms) Jul 1 00:50:09.222: INFO: (15) /api/v1/namespaces/proxy-3795/services/https:proxy-service-zzs5z:tlsportname1/proxy/: tls baz (200; 30.217561ms) Jul 1 00:50:09.223: INFO: (15) /api/v1/namespaces/proxy-3795/services/proxy-service-zzs5z:portname1/proxy/: foo (200; 31.381649ms) Jul 1 00:50:09.223: INFO: (15) /api/v1/namespaces/proxy-3795/pods/http:proxy-service-zzs5z-q2kgj:160/proxy/: foo (200; 31.399659ms) Jul 1 00:50:09.223: INFO: (15) /api/v1/namespaces/proxy-3795/pods/https:proxy-service-zzs5z-q2kgj:462/proxy/: tls qux (200; 31.465893ms) Jul 1 00:50:09.224: INFO: (15) /api/v1/namespaces/proxy-3795/pods/proxy-service-zzs5z-q2kgj:160/proxy/: foo (200; 31.951295ms) Jul 1 00:50:09.224: INFO: (15) /api/v1/namespaces/proxy-3795/pods/https:proxy-service-zzs5z-q2kgj:443/proxy/: test<... (200; 32.100241ms) Jul 1 00:50:09.224: INFO: (15) /api/v1/namespaces/proxy-3795/pods/https:proxy-service-zzs5z-q2kgj:460/proxy/: tls baz (200; 32.222297ms) Jul 1 00:50:09.224: INFO: (15) /api/v1/namespaces/proxy-3795/pods/http:proxy-service-zzs5z-q2kgj:1080/proxy/: ... 
(200; 32.343899ms) Jul 1 00:50:09.225: INFO: (15) /api/v1/namespaces/proxy-3795/pods/proxy-service-zzs5z-q2kgj:162/proxy/: bar (200; 33.017994ms) Jul 1 00:50:09.229: INFO: (16) /api/v1/namespaces/proxy-3795/pods/https:proxy-service-zzs5z-q2kgj:460/proxy/: tls baz (200; 4.296194ms) Jul 1 00:50:09.230: INFO: (16) /api/v1/namespaces/proxy-3795/services/http:proxy-service-zzs5z:portname1/proxy/: foo (200; 5.51231ms) Jul 1 00:50:09.230: INFO: (16) /api/v1/namespaces/proxy-3795/services/proxy-service-zzs5z:portname1/proxy/: foo (200; 5.395759ms) Jul 1 00:50:09.230: INFO: (16) /api/v1/namespaces/proxy-3795/pods/https:proxy-service-zzs5z-q2kgj:462/proxy/: tls qux (200; 5.322057ms) Jul 1 00:50:09.230: INFO: (16) /api/v1/namespaces/proxy-3795/services/https:proxy-service-zzs5z:tlsportname1/proxy/: tls baz (200; 5.379425ms) Jul 1 00:50:09.231: INFO: (16) /api/v1/namespaces/proxy-3795/pods/proxy-service-zzs5z-q2kgj:1080/proxy/: test<... (200; 5.731766ms) Jul 1 00:50:09.231: INFO: (16) /api/v1/namespaces/proxy-3795/pods/http:proxy-service-zzs5z-q2kgj:1080/proxy/: ... (200; 5.708715ms) Jul 1 00:50:09.231: INFO: (16) /api/v1/namespaces/proxy-3795/pods/proxy-service-zzs5z-q2kgj:162/proxy/: bar (200; 5.717111ms) Jul 1 00:50:09.231: INFO: (16) /api/v1/namespaces/proxy-3795/pods/http:proxy-service-zzs5z-q2kgj:160/proxy/: foo (200; 5.925563ms) Jul 1 00:50:09.231: INFO: (16) /api/v1/namespaces/proxy-3795/pods/https:proxy-service-zzs5z-q2kgj:443/proxy/: test (200; 5.927838ms) Jul 1 00:50:09.231: INFO: (16) /api/v1/namespaces/proxy-3795/pods/http:proxy-service-zzs5z-q2kgj:162/proxy/: bar (200; 6.032109ms) Jul 1 00:50:09.231: INFO: (16) /api/v1/namespaces/proxy-3795/pods/proxy-service-zzs5z-q2kgj:160/proxy/: foo (200; 6.338503ms) Jul 1 00:50:09.232: INFO: (16) /api/v1/namespaces/proxy-3795/services/http:proxy-service-zzs5z:portname2/proxy/: bar (200; 6.817143ms) Jul 1 00:50:09.232: INFO: (16) /api/v1/namespaces/proxy-3795/services/https:proxy-service-zzs5z:tlsportname2/proxy/: tls qux (200; 6.874577ms) Jul 1 00:50:09.232: INFO: (16) /api/v1/namespaces/proxy-3795/services/proxy-service-zzs5z:portname2/proxy/: bar (200; 7.001632ms) Jul 1 00:50:09.234: INFO: (17) /api/v1/namespaces/proxy-3795/pods/https:proxy-service-zzs5z-q2kgj:462/proxy/: tls qux (200; 1.89489ms) Jul 1 00:50:09.236: INFO: (17) /api/v1/namespaces/proxy-3795/services/https:proxy-service-zzs5z:tlsportname2/proxy/: tls qux (200; 4.043448ms) Jul 1 00:50:09.236: INFO: (17) /api/v1/namespaces/proxy-3795/pods/proxy-service-zzs5z-q2kgj:162/proxy/: bar (200; 4.02102ms) Jul 1 00:50:09.236: INFO: (17) /api/v1/namespaces/proxy-3795/services/proxy-service-zzs5z:portname1/proxy/: foo (200; 4.094828ms) Jul 1 00:50:09.236: INFO: (17) /api/v1/namespaces/proxy-3795/pods/http:proxy-service-zzs5z-q2kgj:160/proxy/: foo (200; 4.108626ms) Jul 1 00:50:09.236: INFO: (17) /api/v1/namespaces/proxy-3795/services/http:proxy-service-zzs5z:portname1/proxy/: foo (200; 4.093337ms) Jul 1 00:50:09.236: INFO: (17) /api/v1/namespaces/proxy-3795/services/http:proxy-service-zzs5z:portname2/proxy/: bar (200; 4.196926ms) Jul 1 00:50:09.236: INFO: (17) /api/v1/namespaces/proxy-3795/services/proxy-service-zzs5z:portname2/proxy/: bar (200; 4.199747ms) Jul 1 00:50:09.236: INFO: (17) /api/v1/namespaces/proxy-3795/pods/proxy-service-zzs5z-q2kgj/proxy/: test (200; 4.469864ms) Jul 1 00:50:09.237: INFO: (17) /api/v1/namespaces/proxy-3795/pods/https:proxy-service-zzs5z-q2kgj:443/proxy/: ... 
(200; 4.515362ms) Jul 1 00:50:09.237: INFO: (17) /api/v1/namespaces/proxy-3795/pods/proxy-service-zzs5z-q2kgj:1080/proxy/: test<... (200; 4.595843ms) Jul 1 00:50:09.237: INFO: (17) /api/v1/namespaces/proxy-3795/pods/https:proxy-service-zzs5z-q2kgj:460/proxy/: tls baz (200; 4.972595ms) Jul 1 00:50:09.237: INFO: (17) /api/v1/namespaces/proxy-3795/pods/proxy-service-zzs5z-q2kgj:160/proxy/: foo (200; 4.956946ms) Jul 1 00:50:09.241: INFO: (18) /api/v1/namespaces/proxy-3795/pods/proxy-service-zzs5z-q2kgj:160/proxy/: foo (200; 3.673207ms) Jul 1 00:50:09.241: INFO: (18) /api/v1/namespaces/proxy-3795/services/https:proxy-service-zzs5z:tlsportname2/proxy/: tls qux (200; 3.78808ms) Jul 1 00:50:09.241: INFO: (18) /api/v1/namespaces/proxy-3795/pods/proxy-service-zzs5z-q2kgj:162/proxy/: bar (200; 3.843804ms) Jul 1 00:50:09.241: INFO: (18) /api/v1/namespaces/proxy-3795/pods/proxy-service-zzs5z-q2kgj/proxy/: test (200; 3.814439ms) Jul 1 00:50:09.241: INFO: (18) /api/v1/namespaces/proxy-3795/services/proxy-service-zzs5z:portname1/proxy/: foo (200; 3.856016ms) Jul 1 00:50:09.241: INFO: (18) /api/v1/namespaces/proxy-3795/pods/https:proxy-service-zzs5z-q2kgj:462/proxy/: tls qux (200; 3.962398ms) Jul 1 00:50:09.241: INFO: (18) /api/v1/namespaces/proxy-3795/services/proxy-service-zzs5z:portname2/proxy/: bar (200; 3.995752ms) Jul 1 00:50:09.241: INFO: (18) /api/v1/namespaces/proxy-3795/pods/https:proxy-service-zzs5z-q2kgj:460/proxy/: tls baz (200; 4.233003ms) Jul 1 00:50:09.241: INFO: (18) /api/v1/namespaces/proxy-3795/services/https:proxy-service-zzs5z:tlsportname1/proxy/: tls baz (200; 4.310962ms) Jul 1 00:50:09.241: INFO: (18) /api/v1/namespaces/proxy-3795/pods/http:proxy-service-zzs5z-q2kgj:1080/proxy/: ... (200; 4.297698ms) Jul 1 00:50:09.241: INFO: (18) /api/v1/namespaces/proxy-3795/pods/http:proxy-service-zzs5z-q2kgj:162/proxy/: bar (200; 4.357689ms) Jul 1 00:50:09.241: INFO: (18) /api/v1/namespaces/proxy-3795/pods/proxy-service-zzs5z-q2kgj:1080/proxy/: test<... (200; 4.325288ms) Jul 1 00:50:09.241: INFO: (18) /api/v1/namespaces/proxy-3795/pods/http:proxy-service-zzs5z-q2kgj:160/proxy/: foo (200; 4.310342ms) Jul 1 00:50:09.241: INFO: (18) /api/v1/namespaces/proxy-3795/pods/https:proxy-service-zzs5z-q2kgj:443/proxy/: ... (200; 3.888202ms) Jul 1 00:50:09.245: INFO: (19) /api/v1/namespaces/proxy-3795/pods/proxy-service-zzs5z-q2kgj/proxy/: test (200; 3.898308ms) Jul 1 00:50:09.245: INFO: (19) /api/v1/namespaces/proxy-3795/pods/proxy-service-zzs5z-q2kgj:162/proxy/: bar (200; 3.88385ms) Jul 1 00:50:09.246: INFO: (19) /api/v1/namespaces/proxy-3795/pods/http:proxy-service-zzs5z-q2kgj:162/proxy/: bar (200; 4.16179ms) Jul 1 00:50:09.246: INFO: (19) /api/v1/namespaces/proxy-3795/pods/https:proxy-service-zzs5z-q2kgj:460/proxy/: tls baz (200; 4.257062ms) Jul 1 00:50:09.246: INFO: (19) /api/v1/namespaces/proxy-3795/pods/proxy-service-zzs5z-q2kgj:1080/proxy/: test<... 
(200; 4.198111ms) Jul 1 00:50:09.246: INFO: (19) /api/v1/namespaces/proxy-3795/services/http:proxy-service-zzs5z:portname1/proxy/: foo (200; 4.314247ms) Jul 1 00:50:09.246: INFO: (19) /api/v1/namespaces/proxy-3795/pods/https:proxy-service-zzs5z-q2kgj:462/proxy/: tls qux (200; 4.409859ms) Jul 1 00:50:09.246: INFO: (19) /api/v1/namespaces/proxy-3795/services/proxy-service-zzs5z:portname1/proxy/: foo (200; 4.650286ms) Jul 1 00:50:09.246: INFO: (19) /api/v1/namespaces/proxy-3795/services/proxy-service-zzs5z:portname2/proxy/: bar (200; 4.719945ms) Jul 1 00:50:09.247: INFO: (19) /api/v1/namespaces/proxy-3795/services/http:proxy-service-zzs5z:portname2/proxy/: bar (200; 5.519694ms) Jul 1 00:50:09.247: INFO: (19) /api/v1/namespaces/proxy-3795/services/https:proxy-service-zzs5z:tlsportname1/proxy/: tls baz (200; 5.819008ms) Jul 1 00:50:09.247: INFO: (19) /api/v1/namespaces/proxy-3795/services/https:proxy-service-zzs5z:tlsportname2/proxy/: tls qux (200; 5.89343ms) STEP: deleting ReplicationController proxy-service-zzs5z in namespace proxy-3795, will wait for the garbage collector to delete the pods Jul 1 00:50:09.311: INFO: Deleting ReplicationController proxy-service-zzs5z took: 11.851434ms Jul 1 00:50:09.611: INFO: Terminating ReplicationController proxy-service-zzs5z pods took: 300.264783ms [AfterEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 1 00:50:11.812: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "proxy-3795" for this suite. • [SLOW TEST:17.011 seconds] [sig-network] Proxy /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:59 should proxy through a service and a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Proxy version v1 should proxy through a service and a pod [Conformance]","total":294,"completed":226,"skipped":3833,"failed":1,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]"]} SSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 1 00:50:11.830: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name configmap-test-volume-map-ebb97f85-3f3b-4951-9f97-aef8d29e5187 STEP: Creating a pod to test consume configMaps Jul 1 00:50:11.966: INFO: Waiting up to 5m0s for pod "pod-configmaps-f51c99cd-eced-4462-9742-4299fc9e2b82" in namespace "configmap-9698" to be "Succeeded or Failed" Jul 1 00:50:11.982: INFO: Pod "pod-configmaps-f51c99cd-eced-4462-9742-4299fc9e2b82": Phase="Pending", Reason="", readiness=false. 
Elapsed: 16.091007ms Jul 1 00:50:13.986: INFO: Pod "pod-configmaps-f51c99cd-eced-4462-9742-4299fc9e2b82": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02008719s Jul 1 00:50:15.991: INFO: Pod "pod-configmaps-f51c99cd-eced-4462-9742-4299fc9e2b82": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.024819667s STEP: Saw pod success Jul 1 00:50:15.991: INFO: Pod "pod-configmaps-f51c99cd-eced-4462-9742-4299fc9e2b82" satisfied condition "Succeeded or Failed" Jul 1 00:50:15.994: INFO: Trying to get logs from node latest-worker2 pod pod-configmaps-f51c99cd-eced-4462-9742-4299fc9e2b82 container configmap-volume-test: STEP: delete the pod Jul 1 00:50:16.036: INFO: Waiting for pod pod-configmaps-f51c99cd-eced-4462-9742-4299fc9e2b82 to disappear Jul 1 00:50:16.096: INFO: Pod pod-configmaps-f51c99cd-eced-4462-9742-4299fc9e2b82 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 1 00:50:16.096: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-9698" for this suite. •{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]","total":294,"completed":227,"skipped":3850,"failed":1,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]"]} SSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl cluster-info should check if Kubernetes master services is included in cluster-info [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 1 00:50:16.167: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:251 [It] should check if Kubernetes master services is included in cluster-info [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: validating cluster-info Jul 1 00:50:16.227: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config cluster-info' Jul 1 00:50:16.331: INFO: stderr: "" Jul 1 00:50:16.331: INFO: stdout: "\x1b[0;32mKubernetes master\x1b[0m is running at \x1b[0;33mhttps://172.30.12.66:32773\x1b[0m\n\x1b[0;32mKubeDNS\x1b[0m is running at \x1b[0;33mhttps://172.30.12.66:32773/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy\x1b[0m\n\nTo further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 1 00:50:16.331: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-2814" for this suite. 
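------------------------------
The cluster-info spec above shells out to kubectl (the "Running '/usr/local/bin/kubectl ...'" record) and asserts on its stdout; the \x1b[... escapes in the logged stdout are kubectl's ANSI colour codes, not corruption. A rough Go sketch of the same check (binary and kubeconfig paths are assumptions; newer kubectl prints "Kubernetes control plane" instead of "Kubernetes master", so both are accepted here):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("kubectl",
		"--kubeconfig=/root/.kube/config", "cluster-info").CombinedOutput()
	if err != nil {
		panic(err)
	}
	text := string(out)
	// Assert only on the plain text; a substring match ignores the colour
	// escape sequences around it.
	if strings.Contains(text, "Kubernetes master") ||
		strings.Contains(text, "Kubernetes control plane") {
		fmt.Println("cluster-info lists the apiserver endpoint")
	}
}
------------------------------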
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl cluster-info should check if Kubernetes master services is included in cluster-info [Conformance]","total":294,"completed":228,"skipped":3856,"failed":1,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]"]} SSSSSSSS ------------------------------ [sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 1 00:50:16.339: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename aggregator STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:76 Jul 1 00:50:16.416: INFO: >>> kubeConfig: /root/.kube/config [It] Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Registering the sample API server. Jul 1 00:50:16.985: INFO: deployment "sample-apiserver-deployment" doesn't have the required revision set Jul 1 00:50:19.334: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729161416, loc:(*time.Location)(0x80643c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729161416, loc:(*time.Location)(0x80643c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729161417, loc:(*time.Location)(0x80643c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729161416, loc:(*time.Location)(0x80643c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-76d68c4777\" is progressing."}}, CollisionCount:(*int32)(nil)} Jul 1 00:50:21.409: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729161416, loc:(*time.Location)(0x80643c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729161416, loc:(*time.Location)(0x80643c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729161417, loc:(*time.Location)(0x80643c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729161416, loc:(*time.Location)(0x80643c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-76d68c4777\" is progressing."}}, CollisionCount:(*int32)(nil)} Jul 1 00:50:24.041: INFO: Waited 697.463988ms for the sample-apiserver to be ready to handle 
requests. [AfterEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:67 [AfterEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 1 00:50:24.725: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "aggregator-6471" for this suite. • [SLOW TEST:8.393 seconds] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","total":294,"completed":229,"skipped":3864,"failed":1,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]"]} SSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 1 00:50:24.733: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir 0644 on tmpfs Jul 1 00:50:25.213: INFO: Waiting up to 5m0s for pod "pod-929be420-52e8-4978-a9bf-4ccddd3d0b02" in namespace "emptydir-9650" to be "Succeeded or Failed" Jul 1 00:50:25.363: INFO: Pod "pod-929be420-52e8-4978-a9bf-4ccddd3d0b02": Phase="Pending", Reason="", readiness=false. Elapsed: 150.487618ms Jul 1 00:50:27.387: INFO: Pod "pod-929be420-52e8-4978-a9bf-4ccddd3d0b02": Phase="Pending", Reason="", readiness=false. Elapsed: 2.173671826s Jul 1 00:50:29.391: INFO: Pod "pod-929be420-52e8-4978-a9bf-4ccddd3d0b02": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.178244089s STEP: Saw pod success Jul 1 00:50:29.391: INFO: Pod "pod-929be420-52e8-4978-a9bf-4ccddd3d0b02" satisfied condition "Succeeded or Failed" Jul 1 00:50:29.394: INFO: Trying to get logs from node latest-worker2 pod pod-929be420-52e8-4978-a9bf-4ccddd3d0b02 container test-container: STEP: delete the pod Jul 1 00:50:29.442: INFO: Waiting for pod pod-929be420-52e8-4978-a9bf-4ccddd3d0b02 to disappear Jul 1 00:50:29.448: INFO: Pod pod-929be420-52e8-4978-a9bf-4ccddd3d0b02 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 1 00:50:29.448: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-9650" for this suite. 
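------------------------------
In the emptydir spec above, (root,0644,tmpfs) encodes the case under test: run as root, expect file mode 0644, back the volume with tmpfs. A minimal Go sketch of that pod shape (names, image, and the exact command are illustrative; the memory medium is what makes the emptyDir a tmpfs mount):

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	spec := corev1.PodSpec{
		RestartPolicy: corev1.RestartPolicyNever,
		Volumes: []corev1.Volume{{
			Name: "test-volume",
			VolumeSource: corev1.VolumeSource{
				// Medium "Memory" backs the emptyDir with tmpfs instead of
				// the node's disk.
				EmptyDir: &corev1.EmptyDirVolumeSource{Medium: corev1.StorageMediumMemory},
			},
		}},
		Containers: []corev1.Container{{
			Name:  "test-container",
			Image: "busybox",
			// Write a file and set mode 0644; the spec then asserts on the
			// mount's fstype and the file's permissions and content.
			Command:      []string{"sh", "-c", "echo ok > /test-volume/f && chmod 0644 /test-volume/f"},
			VolumeMounts: []corev1.VolumeMount{{Name: "test-volume", MountPath: "/test-volume"}},
		}},
	}
	out, _ := json.MarshalIndent(spec, "", "  ")
	fmt.Println(string(out))
}
------------------------------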
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":294,"completed":230,"skipped":3876,"failed":1,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]"]} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 1 00:50:29.458: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90 Jul 1 00:50:29.531: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Jul 1 00:50:29.541: INFO: Waiting for terminating namespaces to be deleted... Jul 1 00:50:29.544: INFO: Logging pods the apiserver thinks is on node latest-worker before test Jul 1 00:50:29.549: INFO: rally-c184502e-30nwopzm from c-rally-c184502e-zuy338to started at 2020-05-11 08:48:25 +0000 UTC (1 container statuses recorded) Jul 1 00:50:29.549: INFO: Container rally-c184502e-30nwopzm ready: true, restart count 0 Jul 1 00:50:29.549: INFO: rally-c184502e-30nwopzm-7fmqm from c-rally-c184502e-zuy338to started at 2020-05-11 08:48:29 +0000 UTC (1 container statuses recorded) Jul 1 00:50:29.549: INFO: Container rally-c184502e-30nwopzm ready: false, restart count 0 Jul 1 00:50:29.549: INFO: kindnet-hg2tf from kube-system started at 2020-04-29 09:54:13 +0000 UTC (1 container statuses recorded) Jul 1 00:50:29.549: INFO: Container kindnet-cni ready: true, restart count 7 Jul 1 00:50:29.549: INFO: kube-proxy-c8n27 from kube-system started at 2020-04-29 09:54:13 +0000 UTC (1 container statuses recorded) Jul 1 00:50:29.549: INFO: Container kube-proxy ready: true, restart count 0 Jul 1 00:50:29.549: INFO: Logging pods the apiserver thinks is on node latest-worker2 before test Jul 1 00:50:29.553: INFO: rally-c184502e-ept97j69-6xvbj from c-rally-c184502e-2luhd3t4 started at 2020-05-11 08:48:03 +0000 UTC (1 container statuses recorded) Jul 1 00:50:29.553: INFO: Container rally-c184502e-ept97j69 ready: false, restart count 0 Jul 1 00:50:29.553: INFO: terminate-cmd-rpa297bb112-e54d-4fcd-9997-b59cbf421a58 from container-runtime-7090 started at 2020-05-12 09:11:35 +0000 UTC (1 container statuses recorded) Jul 1 00:50:29.553: INFO: Container terminate-cmd-rpa ready: true, restart count 2 Jul 1 00:50:29.553: INFO: kindnet-jl4dn from kube-system started at 2020-04-29 09:54:11 +0000 UTC (1 container statuses recorded) Jul 1 00:50:29.553: INFO: Container kindnet-cni ready: true, restart count 7 Jul 1 00:50:29.553: INFO: kube-proxy-pcmmp from kube-system started at 2020-04-29 09:54:11 +0000 UTC (1 container statuses recorded) Jul 1 00:50:29.553: INFO: Container kube-proxy ready: true, restart count 0 [It] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random label on the found node. STEP: verifying the node has the label kubernetes.io/e2e-79ec3dee-ee38-4632-b3b2-ec5b86597137 95 STEP: Trying to create a pod(pod4) with hostport 54322 and hostIP 0.0.0.0(empty string here) and expect scheduled STEP: Trying to create another pod(pod5) with hostport 54322 but hostIP 127.0.0.1 on the node which pod4 resides and expect not scheduled STEP: removing the label kubernetes.io/e2e-79ec3dee-ee38-4632-b3b2-ec5b86597137 off the node latest-worker2 STEP: verifying the node doesn't have the label kubernetes.io/e2e-79ec3dee-ee38-4632-b3b2-ec5b86597137 [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 1 00:55:37.724: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-9493" for this suite. [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 • [SLOW TEST:308.274 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]","total":294,"completed":231,"skipped":3895,"failed":1,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]"]} SSSS ------------------------------ [sig-apps] Job should delete a job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 1 00:55:37.733: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename job STEP: Waiting for a default service account to be provisioned in namespace [It] should delete a job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a job STEP: Ensuring active pods == parallelism STEP: delete a job STEP: deleting Job.batch foo in namespace job-5643, will wait for the garbage collector to delete the pods Jul 1 00:55:43.949: INFO: Deleting Job.batch foo took: 73.999286ms Jul 1 00:55:44.250: INFO: Terminating Job.batch foo pods took: 300.236426ms STEP: Ensuring job was deleted [AfterEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 1 00:56:25.254: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "job-5643" for this suite. 
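The Job spec above relies on cascading deletion: deleting the Job lets the garbage collector remove its pods, which is the roughly 40-second wait visible in the log. A hedged equivalent with kubectl (the Job name "foo" matches the log; image and command are illustrative):

    # Create a simple Job, then delete it and watch its pods get garbage-collected
    kubectl create job foo --image=busybox:1.29 -- sh -c 'sleep 3600'
    kubectl delete job foo
    kubectl get pods -l job-name=foo    # eventually: No resources found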
• [SLOW TEST:47.528 seconds] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should delete a job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] Job should delete a job [Conformance]","total":294,"completed":232,"skipped":3899,"failed":1,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]"]} SSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replica set. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 1 00:56:25.260: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a replica set. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a ReplicaSet STEP: Ensuring resource quota status captures replicaset creation STEP: Deleting a ReplicaSet STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 1 00:56:36.420: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-9460" for this suite. • [SLOW TEST:11.169 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a replica set. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replica set. 
[Conformance]","total":294,"completed":233,"skipped":3904,"failed":1,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]"]} SSSSSS ------------------------------ [sig-api-machinery] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 1 00:56:36.430: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating secret with name secret-test-0f886359-bf5c-4995-bf76-bba1c43573c8 STEP: Creating a pod to test consume secrets Jul 1 00:56:36.530: INFO: Waiting up to 5m0s for pod "pod-secrets-c4693a5c-c36d-415b-b194-7485cd5d796d" in namespace "secrets-8761" to be "Succeeded or Failed" Jul 1 00:56:36.548: INFO: Pod "pod-secrets-c4693a5c-c36d-415b-b194-7485cd5d796d": Phase="Pending", Reason="", readiness=false. Elapsed: 17.340742ms Jul 1 00:56:38.648: INFO: Pod "pod-secrets-c4693a5c-c36d-415b-b194-7485cd5d796d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.117420626s Jul 1 00:56:40.652: INFO: Pod "pod-secrets-c4693a5c-c36d-415b-b194-7485cd5d796d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.121739701s STEP: Saw pod success Jul 1 00:56:40.652: INFO: Pod "pod-secrets-c4693a5c-c36d-415b-b194-7485cd5d796d" satisfied condition "Succeeded or Failed" Jul 1 00:56:40.656: INFO: Trying to get logs from node latest-worker pod pod-secrets-c4693a5c-c36d-415b-b194-7485cd5d796d container secret-env-test: STEP: delete the pod Jul 1 00:56:40.733: INFO: Waiting for pod pod-secrets-c4693a5c-c36d-415b-b194-7485cd5d796d to disappear Jul 1 00:56:40.738: INFO: Pod pod-secrets-c4693a5c-c36d-415b-b194-7485cd5d796d no longer exists [AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 1 00:56:40.739: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-8761" for this suite. 
•{"msg":"PASSED [sig-api-machinery] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance]","total":294,"completed":234,"skipped":3910,"failed":1,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]"]} SSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 1 00:56:40.747: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating secret with name secret-test-map-44a4d067-b72b-495a-a556-1b4f5c161aea STEP: Creating a pod to test consume secrets Jul 1 00:56:40.794: INFO: Waiting up to 5m0s for pod "pod-secrets-d2c01f60-987e-4083-8265-0ebb4911aa1a" in namespace "secrets-2547" to be "Succeeded or Failed" Jul 1 00:56:40.798: INFO: Pod "pod-secrets-d2c01f60-987e-4083-8265-0ebb4911aa1a": Phase="Pending", Reason="", readiness=false. Elapsed: 3.991995ms Jul 1 00:56:42.803: INFO: Pod "pod-secrets-d2c01f60-987e-4083-8265-0ebb4911aa1a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008440122s Jul 1 00:56:44.807: INFO: Pod "pod-secrets-d2c01f60-987e-4083-8265-0ebb4911aa1a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012489279s STEP: Saw pod success Jul 1 00:56:44.807: INFO: Pod "pod-secrets-d2c01f60-987e-4083-8265-0ebb4911aa1a" satisfied condition "Succeeded or Failed" Jul 1 00:56:44.810: INFO: Trying to get logs from node latest-worker pod pod-secrets-d2c01f60-987e-4083-8265-0ebb4911aa1a container secret-volume-test: STEP: delete the pod Jul 1 00:56:44.831: INFO: Waiting for pod pod-secrets-d2c01f60-987e-4083-8265-0ebb4911aa1a to disappear Jul 1 00:56:44.835: INFO: Pod pod-secrets-d2c01f60-987e-4083-8265-0ebb4911aa1a no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 1 00:56:44.835: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-2547" for this suite. 
•{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":294,"completed":235,"skipped":3913,"failed":1,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 1 00:56:44.844: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide pod UID as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward api env vars Jul 1 00:56:44.947: INFO: Waiting up to 5m0s for pod "downward-api-1c71df01-7ed7-4659-b8b4-7055ea6128fe" in namespace "downward-api-5287" to be "Succeeded or Failed" Jul 1 00:56:44.961: INFO: Pod "downward-api-1c71df01-7ed7-4659-b8b4-7055ea6128fe": Phase="Pending", Reason="", readiness=false. Elapsed: 14.694492ms Jul 1 00:56:46.965: INFO: Pod "downward-api-1c71df01-7ed7-4659-b8b4-7055ea6128fe": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018758475s Jul 1 00:56:48.970: INFO: Pod "downward-api-1c71df01-7ed7-4659-b8b4-7055ea6128fe": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.023033653s STEP: Saw pod success Jul 1 00:56:48.970: INFO: Pod "downward-api-1c71df01-7ed7-4659-b8b4-7055ea6128fe" satisfied condition "Succeeded or Failed" Jul 1 00:56:48.973: INFO: Trying to get logs from node latest-worker pod downward-api-1c71df01-7ed7-4659-b8b4-7055ea6128fe container dapi-container: STEP: delete the pod Jul 1 00:56:49.008: INFO: Waiting for pod downward-api-1c71df01-7ed7-4659-b8b4-7055ea6128fe to disappear Jul 1 00:56:49.015: INFO: Pod downward-api-1c71df01-7ed7-4659-b8b4-7055ea6128fe no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 1 00:56:49.015: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-5287" for this suite. 
•{"msg":"PASSED [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance]","total":294,"completed":236,"skipped":3944,"failed":1,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl diff should check if kubectl diff finds a difference for Deployments [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 1 00:56:49.025: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:251 [It] should check if kubectl diff finds a difference for Deployments [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create deployment with httpd image Jul 1 00:56:49.107: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config create -f -' Jul 1 00:56:52.579: INFO: stderr: "" Jul 1 00:56:52.579: INFO: stdout: "deployment.apps/httpd-deployment created\n" STEP: verify diff finds difference between live and declared image Jul 1 00:56:52.579: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config diff -f -' Jul 1 00:56:54.165: INFO: rc: 1 Jul 1 00:56:54.166: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config delete -f -' Jul 1 00:56:54.280: INFO: stderr: "" Jul 1 00:56:54.280: INFO: stdout: "deployment.apps \"httpd-deployment\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 1 00:56:54.280: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-6109" for this suite. 
• [SLOW TEST:5.261 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl diff /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:871 should check if kubectl diff finds a difference for Deployments [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl diff should check if kubectl diff finds a difference for Deployments [Conformance]","total":294,"completed":237,"skipped":3968,"failed":1,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]"]} SSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 1 00:56:54.286: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir 0644 on node default medium Jul 1 00:56:54.562: INFO: Waiting up to 5m0s for pod "pod-1cd9a012-ff78-4bd4-9707-78a1368b8052" in namespace "emptydir-6256" to be "Succeeded or Failed" Jul 1 00:56:54.578: INFO: Pod "pod-1cd9a012-ff78-4bd4-9707-78a1368b8052": Phase="Pending", Reason="", readiness=false. Elapsed: 16.236015ms Jul 1 00:56:56.792: INFO: Pod "pod-1cd9a012-ff78-4bd4-9707-78a1368b8052": Phase="Pending", Reason="", readiness=false. Elapsed: 2.230492682s Jul 1 00:56:58.796: INFO: Pod "pod-1cd9a012-ff78-4bd4-9707-78a1368b8052": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.233749044s STEP: Saw pod success Jul 1 00:56:58.796: INFO: Pod "pod-1cd9a012-ff78-4bd4-9707-78a1368b8052" satisfied condition "Succeeded or Failed" Jul 1 00:56:58.798: INFO: Trying to get logs from node latest-worker2 pod pod-1cd9a012-ff78-4bd4-9707-78a1368b8052 container test-container: STEP: delete the pod Jul 1 00:56:58.902: INFO: Waiting for pod pod-1cd9a012-ff78-4bd4-9707-78a1368b8052 to disappear Jul 1 00:56:58.914: INFO: Pod pod-1cd9a012-ff78-4bd4-9707-78a1368b8052 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 1 00:56:58.914: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-6256" for this suite. 
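This (root,0644,default) spec is the disk-backed variant of the tmpfs case sketched earlier: omitting "medium" selects the node's default storage. Only the volume stanza changes (illustrative manifest):

    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: emptydir-default-demo
    spec:
      restartPolicy: Never
      containers:
      - name: test-container
        image: busybox:1.29
        command: ["sh", "-c", "echo data > /mnt/test/f && stat -c '%a' /mnt/test/f"]
        volumeMounts:
        - name: scratch
          mountPath: /mnt/test
      volumes:
      - name: scratch
        emptyDir: {}    # no "medium": node's default (disk-backed) storage
    EOF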
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":294,"completed":238,"skipped":3985,"failed":1,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]"]} SSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 1 00:56:58.936: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name configmap-test-volume-map-fb3f933e-0a7b-4450-99dc-a9b65e995401 STEP: Creating a pod to test consume configMaps Jul 1 00:56:59.036: INFO: Waiting up to 5m0s for pod "pod-configmaps-d16c9a52-7bd5-4d3b-899a-ba430f11d244" in namespace "configmap-2873" to be "Succeeded or Failed" Jul 1 00:56:59.054: INFO: Pod "pod-configmaps-d16c9a52-7bd5-4d3b-899a-ba430f11d244": Phase="Pending", Reason="", readiness=false. Elapsed: 17.93305ms Jul 1 00:57:01.076: INFO: Pod "pod-configmaps-d16c9a52-7bd5-4d3b-899a-ba430f11d244": Phase="Pending", Reason="", readiness=false. Elapsed: 2.040369189s Jul 1 00:57:03.081: INFO: Pod "pod-configmaps-d16c9a52-7bd5-4d3b-899a-ba430f11d244": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.045217905s STEP: Saw pod success Jul 1 00:57:03.081: INFO: Pod "pod-configmaps-d16c9a52-7bd5-4d3b-899a-ba430f11d244" satisfied condition "Succeeded or Failed" Jul 1 00:57:03.085: INFO: Trying to get logs from node latest-worker2 pod pod-configmaps-d16c9a52-7bd5-4d3b-899a-ba430f11d244 container configmap-volume-test: STEP: delete the pod Jul 1 00:57:03.170: INFO: Waiting for pod pod-configmaps-d16c9a52-7bd5-4d3b-899a-ba430f11d244 to disappear Jul 1 00:57:03.178: INFO: Pod pod-configmaps-d16c9a52-7bd5-4d3b-899a-ba430f11d244 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 1 00:57:03.178: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-2873" for this suite. 
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":294,"completed":239,"skipped":3988,"failed":1,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Secrets should fail to create secret due to empty secret key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 1 00:57:03.186: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should fail to create secret due to empty secret key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating projection with secret that has name secret-emptykey-test-aafb4ec6-abda-4a5b-bf9d-6518d6c7a383 [AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 1 00:57:03.282: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-9695" for this suite. •{"msg":"PASSED [sig-api-machinery] Secrets should fail to create secret due to empty secret key [Conformance]","total":294,"completed":240,"skipped":4022,"failed":1,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]"]} SSSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 1 00:57:03.291: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:162 [It] should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating the pod Jul 1 00:57:03.334: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 1 00:57:11.149: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-3619" for this suite. 
• [SLOW TEST:7.892 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance]","total":294,"completed":241,"skipped":4031,"failed":1,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]"]} SSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 1 00:57:11.183: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir 0644 on node default medium Jul 1 00:57:11.263: INFO: Waiting up to 5m0s for pod "pod-3a3e9d74-bc3e-4535-8700-0420cefcb085" in namespace "emptydir-2398" to be "Succeeded or Failed" Jul 1 00:57:11.310: INFO: Pod "pod-3a3e9d74-bc3e-4535-8700-0420cefcb085": Phase="Pending", Reason="", readiness=false. Elapsed: 47.186009ms Jul 1 00:57:13.343: INFO: Pod "pod-3a3e9d74-bc3e-4535-8700-0420cefcb085": Phase="Pending", Reason="", readiness=false. Elapsed: 2.079694102s Jul 1 00:57:15.348: INFO: Pod "pod-3a3e9d74-bc3e-4535-8700-0420cefcb085": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.085002548s STEP: Saw pod success Jul 1 00:57:15.348: INFO: Pod "pod-3a3e9d74-bc3e-4535-8700-0420cefcb085" satisfied condition "Succeeded or Failed" Jul 1 00:57:15.351: INFO: Trying to get logs from node latest-worker2 pod pod-3a3e9d74-bc3e-4535-8700-0420cefcb085 container test-container: STEP: delete the pod Jul 1 00:57:15.372: INFO: Waiting for pod pod-3a3e9d74-bc3e-4535-8700-0420cefcb085 to disappear Jul 1 00:57:15.392: INFO: Pod pod-3a3e9d74-bc3e-4535-8700-0420cefcb085 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 1 00:57:15.392: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-2398" for this suite. 
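The (non-root,0644,default) variant runs the same emptyDir check as an unprivileged user via the pod securityContext; the UID below is an illustrative assumption, not the value used by the test:

    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: emptydir-nonroot-demo
    spec:
      restartPolicy: Never
      securityContext:
        runAsUser: 1000    # non-root UID (illustrative)
      containers:
      - name: test-container
        image: busybox:1.29
        command: ["sh", "-c", "id -u && echo data > /mnt/test/f"]
        volumeMounts:
        - name: scratch
          mountPath: /mnt/test
      volumes:
      - name: scratch
        emptyDir: {}
    EOF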
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":294,"completed":242,"skipped":4034,"failed":1,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]"]} SSSSSSSSSSSS ------------------------------ [sig-network] Services should serve a basic endpoint from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 1 00:57:15.400: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:809 [It] should serve a basic endpoint from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating service endpoint-test2 in namespace services-8859 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-8859 to expose endpoints map[] Jul 1 00:57:15.526: INFO: Get endpoints failed (19.378649ms elapsed, ignoring for 5s): endpoints "endpoint-test2" not found Jul 1 00:57:16.532: INFO: successfully validated that service endpoint-test2 in namespace services-8859 exposes endpoints map[] (1.025065748s elapsed) STEP: Creating pod pod1 in namespace services-8859 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-8859 to expose endpoints map[pod1:[80]] Jul 1 00:57:20.075: INFO: successfully validated that service endpoint-test2 in namespace services-8859 exposes endpoints map[pod1:[80]] (3.511490185s elapsed) STEP: Creating pod pod2 in namespace services-8859 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-8859 to expose endpoints map[pod1:[80] pod2:[80]] Jul 1 00:57:23.270: INFO: successfully validated that service endpoint-test2 in namespace services-8859 exposes endpoints map[pod1:[80] pod2:[80]] (3.190362245s elapsed) STEP: Deleting pod pod1 in namespace services-8859 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-8859 to expose endpoints map[pod2:[80]] Jul 1 00:57:24.471: INFO: successfully validated that service endpoint-test2 in namespace services-8859 exposes endpoints map[pod2:[80]] (1.197389966s elapsed) STEP: Deleting pod pod2 in namespace services-8859 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-8859 to expose endpoints map[] Jul 1 00:57:25.671: INFO: successfully validated that service endpoint-test2 in namespace services-8859 exposes endpoints map[] (1.166785672s elapsed) [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 1 00:57:25.684: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-8859" for this suite. 
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:813 • [SLOW TEST:10.299 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should serve a basic endpoint from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Services should serve a basic endpoint from pods [Conformance]","total":294,"completed":243,"skipped":4046,"failed":1,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]"]} SSSSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 1 00:57:25.699: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64 STEP: create the container to handle the HTTPGet hook request. [It] should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the pod with lifecycle hook STEP: delete the pod with lifecycle hook Jul 1 00:57:33.800: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Jul 1 00:57:33.826: INFO: Pod pod-with-prestop-exec-hook still exists Jul 1 00:57:35.826: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Jul 1 00:57:35.830: INFO: Pod pod-with-prestop-exec-hook still exists Jul 1 00:57:37.826: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Jul 1 00:57:37.830: INFO: Pod pod-with-prestop-exec-hook still exists Jul 1 00:57:39.826: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Jul 1 00:57:39.831: INFO: Pod pod-with-prestop-exec-hook still exists Jul 1 00:57:41.826: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Jul 1 00:57:41.832: INFO: Pod pod-with-prestop-exec-hook still exists Jul 1 00:57:43.826: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Jul 1 00:57:43.831: INFO: Pod pod-with-prestop-exec-hook still exists Jul 1 00:57:45.826: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Jul 1 00:57:45.831: INFO: Pod pod-with-prestop-exec-hook no longer exists STEP: check prestop hook [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 1 00:57:45.839: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-987" for this suite. 
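The lifecycle-hook spec above registers a preStop exec handler, which the kubelet runs inside the container before delivering the termination signal; the repeated "still exists" lines are the pod draining while the hook and grace period complete. Sketch (image and commands are illustrative; the real test has the hook call out to a handler pod rather than echo):

    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: pod-with-prestop-exec-hook
    spec:
      containers:
      - name: main
        image: busybox:1.29
        command: ["sh", "-c", "sleep 3600"]
        lifecycle:
          preStop:
            exec:
              command: ["sh", "-c", "echo prestop"]   # runs before the TERM signal
    EOF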
• [SLOW TEST:20.148 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]","total":294,"completed":244,"skipped":4054,"failed":1,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]"]} SSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 1 00:57:45.847: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Jul 1 00:57:46.117: INFO: pod1.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod3", UID:"5c57570c-6918-47c2-93b2-72275ecfb20d", Controller:(*bool)(0xc0012259d2), BlockOwnerDeletion:(*bool)(0xc0012259d3)}} Jul 1 00:57:46.174: INFO: pod2.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod1", UID:"79791a93-e5bf-4319-a28e-a000e6591ec7", Controller:(*bool)(0xc0046dee3a), BlockOwnerDeletion:(*bool)(0xc0046dee3b)}} Jul 1 00:57:46.192: INFO: pod3.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod2", UID:"886bd611-be1e-4d5a-b5ac-505f1b875805", Controller:(*bool)(0xc0046df05a), BlockOwnerDeletion:(*bool)(0xc0046df05b)}} [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 1 00:57:51.367: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-4070" for this suite. 
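The garbage-collector spec above builds a cycle of ownerReferences (per the INFO lines: pod1 is owned by pod3, pod2 by pod1, and pod3 by pod2) and asserts that deletion is not blocked by the cycle. One link of that chain looks like this; the uid is copied from the log above and in practice must match the live owner object's UID:

    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: pod1
      ownerReferences:
      - apiVersion: v1
        kind: Pod
        name: pod3
        uid: 5c57570c-6918-47c2-93b2-72275ecfb20d   # UID of the live pod3 (value from the log)
        controller: true
        blockOwnerDeletion: true
    spec:
      containers:
      - name: main
        image: busybox:1.29
        command: ["sh", "-c", "sleep 3600"]
    EOF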
• [SLOW TEST:5.527 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance]","total":294,"completed":245,"skipped":4065,"failed":1,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]"]} SSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 1 00:57:51.375: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:809 [It] should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating service in namespace services-6144 STEP: creating service affinity-nodeport-transition in namespace services-6144 STEP: creating replication controller affinity-nodeport-transition in namespace services-6144 I0701 00:57:51.855065 8 runners.go:190] Created replication controller with name: affinity-nodeport-transition, namespace: services-6144, replica count: 3 I0701 00:57:54.905512 8 runners.go:190] affinity-nodeport-transition Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0701 00:57:57.905815 8 runners.go:190] affinity-nodeport-transition Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Jul 1 00:57:57.916: INFO: Creating new exec pod Jul 1 00:58:02.939: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-6144 execpod-affinityz8stv -- /bin/sh -x -c nc -zv -t -w 2 affinity-nodeport-transition 80' Jul 1 00:58:03.225: INFO: stderr: "I0701 00:58:03.090930 2329 log.go:172] (0xc000758bb0) (0xc000aec460) Create stream\nI0701 00:58:03.090987 2329 log.go:172] (0xc000758bb0) (0xc000aec460) Stream added, broadcasting: 1\nI0701 00:58:03.096058 2329 log.go:172] (0xc000758bb0) Reply frame received for 1\nI0701 00:58:03.096126 2329 log.go:172] (0xc000758bb0) (0xc0005768c0) Create stream\nI0701 00:58:03.096147 2329 log.go:172] (0xc000758bb0) (0xc0005768c0) Stream added, broadcasting: 3\nI0701 00:58:03.097636 2329 log.go:172] (0xc000758bb0) Reply frame received for 3\nI0701 00:58:03.097671 2329 log.go:172] (0xc000758bb0) (0xc0004148c0) Create stream\nI0701 00:58:03.097683 2329 log.go:172] (0xc000758bb0) (0xc0004148c0) Stream added, broadcasting: 5\nI0701 00:58:03.098594 2329 log.go:172] (0xc000758bb0) Reply frame received for 5\nI0701 00:58:03.203990 2329 log.go:172] (0xc000758bb0) Data frame received for 5\nI0701 
00:58:03.204023 2329 log.go:172] (0xc0004148c0) (5) Data frame handling\nI0701 00:58:03.204041 2329 log.go:172] (0xc0004148c0) (5) Data frame sent\n+ nc -zv -t -w 2 affinity-nodeport-transition 80\nI0701 00:58:03.216187 2329 log.go:172] (0xc000758bb0) Data frame received for 5\nI0701 00:58:03.216216 2329 log.go:172] (0xc0004148c0) (5) Data frame handling\nI0701 00:58:03.216242 2329 log.go:172] (0xc0004148c0) (5) Data frame sent\nConnection to affinity-nodeport-transition 80 port [tcp/http] succeeded!\nI0701 00:58:03.216445 2329 log.go:172] (0xc000758bb0) Data frame received for 3\nI0701 00:58:03.216464 2329 log.go:172] (0xc0005768c0) (3) Data frame handling\nI0701 00:58:03.216545 2329 log.go:172] (0xc000758bb0) Data frame received for 5\nI0701 00:58:03.216563 2329 log.go:172] (0xc0004148c0) (5) Data frame handling\nI0701 00:58:03.218577 2329 log.go:172] (0xc000758bb0) Data frame received for 1\nI0701 00:58:03.218588 2329 log.go:172] (0xc000aec460) (1) Data frame handling\nI0701 00:58:03.218595 2329 log.go:172] (0xc000aec460) (1) Data frame sent\nI0701 00:58:03.218745 2329 log.go:172] (0xc000758bb0) (0xc000aec460) Stream removed, broadcasting: 1\nI0701 00:58:03.218826 2329 log.go:172] (0xc000758bb0) Go away received\nI0701 00:58:03.219187 2329 log.go:172] (0xc000758bb0) (0xc000aec460) Stream removed, broadcasting: 1\nI0701 00:58:03.219208 2329 log.go:172] (0xc000758bb0) (0xc0005768c0) Stream removed, broadcasting: 3\nI0701 00:58:03.219221 2329 log.go:172] (0xc000758bb0) (0xc0004148c0) Stream removed, broadcasting: 5\n" Jul 1 00:58:03.225: INFO: stdout: "" Jul 1 00:58:03.225: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-6144 execpod-affinityz8stv -- /bin/sh -x -c nc -zv -t -w 2 10.108.169.132 80' Jul 1 00:58:03.439: INFO: stderr: "I0701 00:58:03.353817 2349 log.go:172] (0xc000a85600) (0xc000b46140) Create stream\nI0701 00:58:03.353887 2349 log.go:172] (0xc000a85600) (0xc000b46140) Stream added, broadcasting: 1\nI0701 00:58:03.358680 2349 log.go:172] (0xc000a85600) Reply frame received for 1\nI0701 00:58:03.358749 2349 log.go:172] (0xc000a85600) (0xc000856d20) Create stream\nI0701 00:58:03.358777 2349 log.go:172] (0xc000a85600) (0xc000856d20) Stream added, broadcasting: 3\nI0701 00:58:03.359687 2349 log.go:172] (0xc000a85600) Reply frame received for 3\nI0701 00:58:03.359717 2349 log.go:172] (0xc000a85600) (0xc000544a00) Create stream\nI0701 00:58:03.359725 2349 log.go:172] (0xc000a85600) (0xc000544a00) Stream added, broadcasting: 5\nI0701 00:58:03.360532 2349 log.go:172] (0xc000a85600) Reply frame received for 5\nI0701 00:58:03.431119 2349 log.go:172] (0xc000a85600) Data frame received for 5\nI0701 00:58:03.431177 2349 log.go:172] (0xc000544a00) (5) Data frame handling\nI0701 00:58:03.431200 2349 log.go:172] (0xc000544a00) (5) Data frame sent\nI0701 00:58:03.431219 2349 log.go:172] (0xc000a85600) Data frame received for 5\nI0701 00:58:03.431232 2349 log.go:172] (0xc000544a00) (5) Data frame handling\n+ nc -zv -t -w 2 10.108.169.132 80\nConnection to 10.108.169.132 80 port [tcp/http] succeeded!\nI0701 00:58:03.431255 2349 log.go:172] (0xc000a85600) Data frame received for 3\nI0701 00:58:03.431285 2349 log.go:172] (0xc000856d20) (3) Data frame handling\nI0701 00:58:03.432446 2349 log.go:172] (0xc000a85600) Data frame received for 1\nI0701 00:58:03.432514 2349 log.go:172] (0xc000b46140) (1) Data frame handling\nI0701 00:58:03.432558 2349 log.go:172] (0xc000b46140) (1) Data frame sent\nI0701 00:58:03.432595 
2349 log.go:172] (0xc000a85600) (0xc000b46140) Stream removed, broadcasting: 1\nI0701 00:58:03.432635 2349 log.go:172] (0xc000a85600) Go away received\nI0701 00:58:03.432941 2349 log.go:172] (0xc000a85600) (0xc000b46140) Stream removed, broadcasting: 1\nI0701 00:58:03.432965 2349 log.go:172] (0xc000a85600) (0xc000856d20) Stream removed, broadcasting: 3\nI0701 00:58:03.433004 2349 log.go:172] (0xc000a85600) (0xc000544a00) Stream removed, broadcasting: 5\n" Jul 1 00:58:03.439: INFO: stdout: "" Jul 1 00:58:03.439: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-6144 execpod-affinityz8stv -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.13 31501' Jul 1 00:58:03.650: INFO: stderr: "I0701 00:58:03.576065 2368 log.go:172] (0xc0009b7810) (0xc000be6280) Create stream\nI0701 00:58:03.576124 2368 log.go:172] (0xc0009b7810) (0xc000be6280) Stream added, broadcasting: 1\nI0701 00:58:03.580981 2368 log.go:172] (0xc0009b7810) Reply frame received for 1\nI0701 00:58:03.581031 2368 log.go:172] (0xc0009b7810) (0xc0008321e0) Create stream\nI0701 00:58:03.581049 2368 log.go:172] (0xc0009b7810) (0xc0008321e0) Stream added, broadcasting: 3\nI0701 00:58:03.582434 2368 log.go:172] (0xc0009b7810) Reply frame received for 3\nI0701 00:58:03.582487 2368 log.go:172] (0xc0009b7810) (0xc0005c6960) Create stream\nI0701 00:58:03.582517 2368 log.go:172] (0xc0009b7810) (0xc0005c6960) Stream added, broadcasting: 5\nI0701 00:58:03.583408 2368 log.go:172] (0xc0009b7810) Reply frame received for 5\nI0701 00:58:03.639986 2368 log.go:172] (0xc0009b7810) Data frame received for 5\nI0701 00:58:03.640024 2368 log.go:172] (0xc0005c6960) (5) Data frame handling\nI0701 00:58:03.640047 2368 log.go:172] (0xc0005c6960) (5) Data frame sent\nI0701 00:58:03.640065 2368 log.go:172] (0xc0009b7810) Data frame received for 5\nI0701 00:58:03.640092 2368 log.go:172] (0xc0005c6960) (5) Data frame handling\n+ nc -zv -t -w 2 172.17.0.13 31501\nConnection to 172.17.0.13 31501 port [tcp/31501] succeeded!\nI0701 00:58:03.640174 2368 log.go:172] (0xc0005c6960) (5) Data frame sent\nI0701 00:58:03.640292 2368 log.go:172] (0xc0009b7810) Data frame received for 5\nI0701 00:58:03.640321 2368 log.go:172] (0xc0005c6960) (5) Data frame handling\nI0701 00:58:03.640342 2368 log.go:172] (0xc0009b7810) Data frame received for 3\nI0701 00:58:03.640365 2368 log.go:172] (0xc0008321e0) (3) Data frame handling\nI0701 00:58:03.642052 2368 log.go:172] (0xc0009b7810) Data frame received for 1\nI0701 00:58:03.642095 2368 log.go:172] (0xc000be6280) (1) Data frame handling\nI0701 00:58:03.642114 2368 log.go:172] (0xc000be6280) (1) Data frame sent\nI0701 00:58:03.642132 2368 log.go:172] (0xc0009b7810) (0xc000be6280) Stream removed, broadcasting: 1\nI0701 00:58:03.642152 2368 log.go:172] (0xc0009b7810) Go away received\nI0701 00:58:03.642473 2368 log.go:172] (0xc0009b7810) (0xc000be6280) Stream removed, broadcasting: 1\nI0701 00:58:03.642489 2368 log.go:172] (0xc0009b7810) (0xc0008321e0) Stream removed, broadcasting: 3\nI0701 00:58:03.642497 2368 log.go:172] (0xc0009b7810) (0xc0005c6960) Stream removed, broadcasting: 5\n" Jul 1 00:58:03.650: INFO: stdout: "" Jul 1 00:58:03.650: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-6144 execpod-affinityz8stv -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.12 31501' Jul 1 00:58:03.920: INFO: stderr: "I0701 00:58:03.826759 2388 log.go:172] (0xc000ab9550) (0xc000bf88c0) Create 
stream\nI0701 00:58:03.826809 2388 log.go:172] (0xc000ab9550) (0xc000bf88c0) Stream added, broadcasting: 1\nI0701 00:58:03.829327 2388 log.go:172] (0xc000ab9550) Reply frame received for 1\nI0701 00:58:03.829381 2388 log.go:172] (0xc000ab9550) (0xc000ad0000) Create stream\nI0701 00:58:03.829396 2388 log.go:172] (0xc000ab9550) (0xc000ad0000) Stream added, broadcasting: 3\nI0701 00:58:03.830146 2388 log.go:172] (0xc000ab9550) Reply frame received for 3\nI0701 00:58:03.830176 2388 log.go:172] (0xc000ab9550) (0xc000bf8960) Create stream\nI0701 00:58:03.830185 2388 log.go:172] (0xc000ab9550) (0xc000bf8960) Stream added, broadcasting: 5\nI0701 00:58:03.830938 2388 log.go:172] (0xc000ab9550) Reply frame received for 5\nI0701 00:58:03.911989 2388 log.go:172] (0xc000ab9550) Data frame received for 5\nI0701 00:58:03.912012 2388 log.go:172] (0xc000bf8960) (5) Data frame handling\nI0701 00:58:03.912028 2388 log.go:172] (0xc000bf8960) (5) Data frame sent\n+ nc -zv -t -w 2 172.17.0.12 31501\nI0701 00:58:03.912063 2388 log.go:172] (0xc000ab9550) Data frame received for 5\nI0701 00:58:03.912076 2388 log.go:172] (0xc000bf8960) (5) Data frame handling\nI0701 00:58:03.912082 2388 log.go:172] (0xc000bf8960) (5) Data frame sent\nConnection to 172.17.0.12 31501 port [tcp/31501] succeeded!\nI0701 00:58:03.912419 2388 log.go:172] (0xc000ab9550) Data frame received for 5\nI0701 00:58:03.912437 2388 log.go:172] (0xc000bf8960) (5) Data frame handling\nI0701 00:58:03.912727 2388 log.go:172] (0xc000ab9550) Data frame received for 3\nI0701 00:58:03.912749 2388 log.go:172] (0xc000ad0000) (3) Data frame handling\nI0701 00:58:03.914295 2388 log.go:172] (0xc000ab9550) Data frame received for 1\nI0701 00:58:03.914323 2388 log.go:172] (0xc000bf88c0) (1) Data frame handling\nI0701 00:58:03.914337 2388 log.go:172] (0xc000bf88c0) (1) Data frame sent\nI0701 00:58:03.914354 2388 log.go:172] (0xc000ab9550) (0xc000bf88c0) Stream removed, broadcasting: 1\nI0701 00:58:03.914382 2388 log.go:172] (0xc000ab9550) Go away received\nI0701 00:58:03.914685 2388 log.go:172] (0xc000ab9550) (0xc000bf88c0) Stream removed, broadcasting: 1\nI0701 00:58:03.914706 2388 log.go:172] (0xc000ab9550) (0xc000ad0000) Stream removed, broadcasting: 3\nI0701 00:58:03.914717 2388 log.go:172] (0xc000ab9550) (0xc000bf8960) Stream removed, broadcasting: 5\n" Jul 1 00:58:03.920: INFO: stdout: "" Jul 1 00:58:03.928: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-6144 execpod-affinityz8stv -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://172.17.0.13:31501/ ; done' Jul 1 00:58:04.317: INFO: stderr: "I0701 00:58:04.086481 2408 log.go:172] (0xc0000ef550) (0xc000aec1e0) Create stream\nI0701 00:58:04.086541 2408 log.go:172] (0xc0000ef550) (0xc000aec1e0) Stream added, broadcasting: 1\nI0701 00:58:04.089350 2408 log.go:172] (0xc0000ef550) Reply frame received for 1\nI0701 00:58:04.089405 2408 log.go:172] (0xc0000ef550) (0xc000616d20) Create stream\nI0701 00:58:04.089423 2408 log.go:172] (0xc0000ef550) (0xc000616d20) Stream added, broadcasting: 3\nI0701 00:58:04.090478 2408 log.go:172] (0xc0000ef550) Reply frame received for 3\nI0701 00:58:04.090513 2408 log.go:172] (0xc0000ef550) (0xc000aec280) Create stream\nI0701 00:58:04.090524 2408 log.go:172] (0xc0000ef550) (0xc000aec280) Stream added, broadcasting: 5\nI0701 00:58:04.091494 2408 log.go:172] (0xc0000ef550) Reply frame received for 5\nI0701 00:58:04.165737 2408 log.go:172] (0xc0000ef550) Data frame 
received for 3\nI0701 00:58:04.165798 2408 log.go:172] (0xc000616d20) (3) Data frame handling\nI0701 00:58:04.165818 2408 log.go:172] (0xc000616d20) (3) Data frame sent\nI0701 00:58:04.165854 2408 log.go:172] (0xc0000ef550) Data frame received for 5\nI0701 00:58:04.165864 2408 log.go:172] (0xc000aec280) (5) Data frame handling\nI0701 00:58:04.165878 2408 log.go:172] (0xc000aec280) (5) Data frame sent\n+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31501/\nI0701 00:58:04.188571 2408 log.go:172] (0xc0000ef550) Data frame received for 3\nI0701 00:58:04.188603 2408 log.go:172] (0xc000616d20) (3) Data frame handling\nI0701 00:58:04.188642 2408 log.go:172] (0xc000616d20) (3) Data frame sent\nI0701 00:58:04.188858 2408 log.go:172] (0xc0000ef550) Data frame received for 5\nI0701 00:58:04.188896 2408 log.go:172] (0xc000aec280) (5) Data frame handling\nI0701 00:58:04.188914 2408 log.go:172] (0xc000aec280) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31501/\nI0701 00:58:04.188936 2408 log.go:172] (0xc0000ef550) Data frame received for 3\nI0701 00:58:04.188948 2408 log.go:172] (0xc000616d20) (3) Data frame handling\nI0701 00:58:04.188961 2408 log.go:172] (0xc000616d20) (3) Data frame sent\nI0701 00:58:04.196019 2408 log.go:172] (0xc0000ef550) Data frame received for 3\nI0701 00:58:04.196038 2408 log.go:172] (0xc000616d20) (3) Data frame handling\nI0701 00:58:04.196056 2408 log.go:172] (0xc000616d20) (3) Data frame sent\nI0701 00:58:04.196963 2408 log.go:172] (0xc0000ef550) Data frame received for 5\nI0701 00:58:04.196993 2408 log.go:172] (0xc000aec280) (5) Data frame handling\nI0701 00:58:04.197014 2408 log.go:172] (0xc000aec280) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31501/\nI0701 00:58:04.197085 2408 log.go:172] (0xc0000ef550) Data frame received for 3\nI0701 00:58:04.197102 2408 log.go:172] (0xc000616d20) (3) Data frame handling\nI0701 00:58:04.197222 2408 log.go:172] (0xc000616d20) (3) Data frame sent\nI0701 00:58:04.203374 2408 log.go:172] (0xc0000ef550) Data frame received for 3\nI0701 00:58:04.203402 2408 log.go:172] (0xc000616d20) (3) Data frame handling\nI0701 00:58:04.203427 2408 log.go:172] (0xc000616d20) (3) Data frame sent\nI0701 00:58:04.204231 2408 log.go:172] (0xc0000ef550) Data frame received for 3\nI0701 00:58:04.204247 2408 log.go:172] (0xc000616d20) (3) Data frame handling\nI0701 00:58:04.204265 2408 log.go:172] (0xc0000ef550) Data frame received for 5\nI0701 00:58:04.204296 2408 log.go:172] (0xc000aec280) (5) Data frame handling\nI0701 00:58:04.204312 2408 log.go:172] (0xc000aec280) (5) Data frame sent\nI0701 00:58:04.204326 2408 log.go:172] (0xc000616d20) (3) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31501/\nI0701 00:58:04.232069 2408 log.go:172] (0xc0000ef550) Data frame received for 3\nI0701 00:58:04.232096 2408 log.go:172] (0xc000616d20) (3) Data frame handling\nI0701 00:58:04.232116 2408 log.go:172] (0xc000616d20) (3) Data frame sent\nI0701 00:58:04.232641 2408 log.go:172] (0xc0000ef550) Data frame received for 5\nI0701 00:58:04.232665 2408 log.go:172] (0xc000aec280) (5) Data frame handling\nI0701 00:58:04.232677 2408 log.go:172] (0xc000aec280) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31501/\nI0701 00:58:04.232696 2408 log.go:172] (0xc0000ef550) Data frame received for 3\nI0701 00:58:04.232706 2408 log.go:172] (0xc000616d20) (3) Data frame handling\nI0701 00:58:04.232716 2408 log.go:172] 
(0xc000616d20) (3) Data frame sent\nI0701 00:58:04.240142 2408 log.go:172] (0xc0000ef550) Data frame received for 3\nI0701 00:58:04.240175 2408 log.go:172] (0xc000616d20) (3) Data frame handling\nI0701 00:58:04.240191 2408 log.go:172] (0xc000616d20) (3) Data frame sent\nI0701 00:58:04.240543 2408 log.go:172] (0xc0000ef550) Data frame received for 5\nI0701 00:58:04.240573 2408 log.go:172] (0xc000aec280) (5) Data frame handling\nI0701 00:58:04.240593 2408 log.go:172] (0xc000aec280) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31501/\nI0701 00:58:04.240621 2408 log.go:172] (0xc0000ef550) Data frame received for 3\nI0701 00:58:04.240637 2408 log.go:172] (0xc000616d20) (3) Data frame handling\nI0701 00:58:04.240655 2408 log.go:172] (0xc000616d20) (3) Data frame sent\nI0701 00:58:04.248521 2408 log.go:172] (0xc0000ef550) Data frame received for 3\nI0701 00:58:04.248533 2408 log.go:172] (0xc000616d20) (3) Data frame handling\nI0701 00:58:04.248539 2408 log.go:172] (0xc000616d20) (3) Data frame sent\nI0701 00:58:04.248778 2408 log.go:172] (0xc0000ef550) Data frame received for 3\nI0701 00:58:04.248787 2408 log.go:172] (0xc000616d20) (3) Data frame handling\nI0701 00:58:04.248798 2408 log.go:172] (0xc0000ef550) Data frame received for 5\nI0701 00:58:04.248816 2408 log.go:172] (0xc000aec280) (5) Data frame handling\nI0701 00:58:04.248826 2408 log.go:172] (0xc000aec280) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31501/\nI0701 00:58:04.248857 2408 log.go:172] (0xc000616d20) (3) Data frame sent\nI0701 00:58:04.252566 2408 log.go:172] (0xc0000ef550) Data frame received for 3\nI0701 00:58:04.252604 2408 log.go:172] (0xc000616d20) (3) Data frame handling\nI0701 00:58:04.252651 2408 log.go:172] (0xc000616d20) (3) Data frame sent\nI0701 00:58:04.252896 2408 log.go:172] (0xc0000ef550) Data frame received for 3\nI0701 00:58:04.252909 2408 log.go:172] (0xc000616d20) (3) Data frame handling\nI0701 00:58:04.252938 2408 log.go:172] (0xc000616d20) (3) Data frame sent\nI0701 00:58:04.252957 2408 log.go:172] (0xc0000ef550) Data frame received for 5\nI0701 00:58:04.252965 2408 log.go:172] (0xc000aec280) (5) Data frame handling\nI0701 00:58:04.252970 2408 log.go:172] (0xc000aec280) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31501/\nI0701 00:58:04.256003 2408 log.go:172] (0xc0000ef550) Data frame received for 3\nI0701 00:58:04.256048 2408 log.go:172] (0xc000616d20) (3) Data frame handling\nI0701 00:58:04.256086 2408 log.go:172] (0xc000616d20) (3) Data frame sent\nI0701 00:58:04.256295 2408 log.go:172] (0xc0000ef550) Data frame received for 5\nI0701 00:58:04.256307 2408 log.go:172] (0xc000aec280) (5) Data frame handling\nI0701 00:58:04.256313 2408 log.go:172] (0xc000aec280) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31501/\nI0701 00:58:04.256459 2408 log.go:172] (0xc0000ef550) Data frame received for 3\nI0701 00:58:04.256494 2408 log.go:172] (0xc000616d20) (3) Data frame handling\nI0701 00:58:04.256530 2408 log.go:172] (0xc000616d20) (3) Data frame sent\nI0701 00:58:04.263407 2408 log.go:172] (0xc0000ef550) Data frame received for 3\nI0701 00:58:04.263421 2408 log.go:172] (0xc000616d20) (3) Data frame handling\nI0701 00:58:04.263435 2408 log.go:172] (0xc000616d20) (3) Data frame sent\nI0701 00:58:04.264650 2408 log.go:172] (0xc0000ef550) Data frame received for 5\nI0701 00:58:04.264722 2408 log.go:172] (0xc000aec280) (5) Data frame handling\nI0701 00:58:04.264740 2408 
log.go:172] (0xc000aec280) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31501/\nI0701 00:58:04.264754 2408 log.go:172] (0xc0000ef550) Data frame received for 3\nI0701 00:58:04.264771 2408 log.go:172] (0xc000616d20) (3) Data frame handling\nI0701 00:58:04.264780 2408 log.go:172] (0xc000616d20) (3) Data frame sent\nI0701 00:58:04.275007 2408 log.go:172] (0xc0000ef550) Data frame received for 3\nI0701 00:58:04.275033 2408 log.go:172] (0xc000616d20) (3) Data frame handling\nI0701 00:58:04.275056 2408 log.go:172] (0xc000616d20) (3) Data frame sent\nI0701 00:58:04.275286 2408 log.go:172] (0xc0000ef550) Data frame received for 3\nI0701 00:58:04.275330 2408 log.go:172] (0xc000616d20) (3) Data frame handling\nI0701 00:58:04.275345 2408 log.go:172] (0xc000616d20) (3) Data frame sent\nI0701 00:58:04.275356 2408 log.go:172] (0xc0000ef550) Data frame received for 5\nI0701 00:58:04.275362 2408 log.go:172] (0xc000aec280) (5) Data frame handling\nI0701 00:58:04.275370 2408 log.go:172] (0xc000aec280) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31501/\nI0701 00:58:04.280212 2408 log.go:172] (0xc0000ef550) Data frame received for 3\nI0701 00:58:04.280229 2408 log.go:172] (0xc000616d20) (3) Data frame handling\nI0701 00:58:04.280243 2408 log.go:172] (0xc000616d20) (3) Data frame sent\nI0701 00:58:04.280632 2408 log.go:172] (0xc0000ef550) Data frame received for 3\nI0701 00:58:04.280655 2408 log.go:172] (0xc000616d20) (3) Data frame handling\nI0701 00:58:04.280666 2408 log.go:172] (0xc000616d20) (3) Data frame sent\nI0701 00:58:04.280682 2408 log.go:172] (0xc0000ef550) Data frame received for 5\nI0701 00:58:04.280700 2408 log.go:172] (0xc000aec280) (5) Data frame handling\nI0701 00:58:04.280721 2408 log.go:172] (0xc000aec280) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31501/\nI0701 00:58:04.284925 2408 log.go:172] (0xc0000ef550) Data frame received for 3\nI0701 00:58:04.284950 2408 log.go:172] (0xc000616d20) (3) Data frame handling\nI0701 00:58:04.284973 2408 log.go:172] (0xc000616d20) (3) Data frame sent\nI0701 00:58:04.285714 2408 log.go:172] (0xc0000ef550) Data frame received for 3\nI0701 00:58:04.285734 2408 log.go:172] (0xc0000ef550) Data frame received for 5\nI0701 00:58:04.285762 2408 log.go:172] (0xc000aec280) (5) Data frame handling\nI0701 00:58:04.285785 2408 log.go:172] (0xc000aec280) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31501/\nI0701 00:58:04.285805 2408 log.go:172] (0xc000616d20) (3) Data frame handling\nI0701 00:58:04.285814 2408 log.go:172] (0xc000616d20) (3) Data frame sent\nI0701 00:58:04.289677 2408 log.go:172] (0xc0000ef550) Data frame received for 3\nI0701 00:58:04.289689 2408 log.go:172] (0xc000616d20) (3) Data frame handling\nI0701 00:58:04.289696 2408 log.go:172] (0xc000616d20) (3) Data frame sent\nI0701 00:58:04.290497 2408 log.go:172] (0xc0000ef550) Data frame received for 5\nI0701 00:58:04.290533 2408 log.go:172] (0xc000aec280) (5) Data frame handling\nI0701 00:58:04.290552 2408 log.go:172] (0xc000aec280) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31501/\nI0701 00:58:04.290577 2408 log.go:172] (0xc0000ef550) Data frame received for 3\nI0701 00:58:04.290614 2408 log.go:172] (0xc000616d20) (3) Data frame handling\nI0701 00:58:04.290644 2408 log.go:172] (0xc000616d20) (3) Data frame sent\nI0701 00:58:04.295239 2408 log.go:172] (0xc0000ef550) Data frame received for 3\nI0701 00:58:04.295263 2408 
log.go:172] (0xc000616d20) (3) Data frame handling\nI0701 00:58:04.295271 2408 log.go:172] (0xc000616d20) (3) Data frame sent\nI0701 00:58:04.295724 2408 log.go:172] (0xc0000ef550) Data frame received for 3\nI0701 00:58:04.295748 2408 log.go:172] (0xc000616d20) (3) Data frame handling\nI0701 00:58:04.295761 2408 log.go:172] (0xc000616d20) (3) Data frame sent\nI0701 00:58:04.295787 2408 log.go:172] (0xc0000ef550) Data frame received for 5\nI0701 00:58:04.295793 2408 log.go:172] (0xc000aec280) (5) Data frame handling\nI0701 00:58:04.295799 2408 log.go:172] (0xc000aec280) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31501/\nI0701 00:58:04.300849 2408 log.go:172] (0xc0000ef550) Data frame received for 3\nI0701 00:58:04.300881 2408 log.go:172] (0xc000616d20) (3) Data frame handling\nI0701 00:58:04.300911 2408 log.go:172] (0xc000616d20) (3) Data frame sent\nI0701 00:58:04.301660 2408 log.go:172] (0xc0000ef550) Data frame received for 3\nI0701 00:58:04.301692 2408 log.go:172] (0xc0000ef550) Data frame received for 5\nI0701 00:58:04.301719 2408 log.go:172] (0xc000aec280) (5) Data frame handling\nI0701 00:58:04.301733 2408 log.go:172] (0xc000aec280) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31501/\nI0701 00:58:04.301749 2408 log.go:172] (0xc000616d20) (3) Data frame handling\nI0701 00:58:04.301761 2408 log.go:172] (0xc000616d20) (3) Data frame sent\nI0701 00:58:04.305861 2408 log.go:172] (0xc0000ef550) Data frame received for 3\nI0701 00:58:04.305876 2408 log.go:172] (0xc000616d20) (3) Data frame handling\nI0701 00:58:04.305885 2408 log.go:172] (0xc000616d20) (3) Data frame sent\nI0701 00:58:04.307045 2408 log.go:172] (0xc0000ef550) Data frame received for 3\nI0701 00:58:04.307090 2408 log.go:172] (0xc000616d20) (3) Data frame handling\nI0701 00:58:04.307122 2408 log.go:172] (0xc0000ef550) Data frame received for 5\nI0701 00:58:04.307140 2408 log.go:172] (0xc000aec280) (5) Data frame handling\nI0701 00:58:04.308837 2408 log.go:172] (0xc0000ef550) Data frame received for 1\nI0701 00:58:04.308871 2408 log.go:172] (0xc000aec1e0) (1) Data frame handling\nI0701 00:58:04.308891 2408 log.go:172] (0xc000aec1e0) (1) Data frame sent\nI0701 00:58:04.308915 2408 log.go:172] (0xc0000ef550) (0xc000aec1e0) Stream removed, broadcasting: 1\nI0701 00:58:04.309005 2408 log.go:172] (0xc0000ef550) Go away received\nI0701 00:58:04.309596 2408 log.go:172] (0xc0000ef550) (0xc000aec1e0) Stream removed, broadcasting: 1\nI0701 00:58:04.309624 2408 log.go:172] (0xc0000ef550) (0xc000616d20) Stream removed, broadcasting: 3\nI0701 00:58:04.309635 2408 log.go:172] (0xc0000ef550) (0xc000aec280) Stream removed, broadcasting: 5\n" Jul 1 00:58:04.318: INFO: stdout: "\naffinity-nodeport-transition-plc96\naffinity-nodeport-transition-qzk8d\naffinity-nodeport-transition-plc96\naffinity-nodeport-transition-fd9sn\naffinity-nodeport-transition-plc96\naffinity-nodeport-transition-plc96\naffinity-nodeport-transition-qzk8d\naffinity-nodeport-transition-qzk8d\naffinity-nodeport-transition-plc96\naffinity-nodeport-transition-qzk8d\naffinity-nodeport-transition-fd9sn\naffinity-nodeport-transition-plc96\naffinity-nodeport-transition-plc96\naffinity-nodeport-transition-plc96\naffinity-nodeport-transition-plc96\naffinity-nodeport-transition-fd9sn" Jul 1 00:58:04.318: INFO: Received response from host: Jul 1 00:58:04.318: INFO: Received response from host: affinity-nodeport-transition-plc96 Jul 1 00:58:04.318: INFO: Received response from host: 
affinity-nodeport-transition-qzk8d Jul 1 00:58:04.318: INFO: Received response from host: affinity-nodeport-transition-plc96 Jul 1 00:58:04.318: INFO: Received response from host: affinity-nodeport-transition-fd9sn Jul 1 00:58:04.318: INFO: Received response from host: affinity-nodeport-transition-plc96 Jul 1 00:58:04.318: INFO: Received response from host: affinity-nodeport-transition-plc96 Jul 1 00:58:04.318: INFO: Received response from host: affinity-nodeport-transition-qzk8d Jul 1 00:58:04.318: INFO: Received response from host: affinity-nodeport-transition-qzk8d Jul 1 00:58:04.318: INFO: Received response from host: affinity-nodeport-transition-plc96 Jul 1 00:58:04.318: INFO: Received response from host: affinity-nodeport-transition-qzk8d Jul 1 00:58:04.318: INFO: Received response from host: affinity-nodeport-transition-fd9sn Jul 1 00:58:04.318: INFO: Received response from host: affinity-nodeport-transition-plc96 Jul 1 00:58:04.318: INFO: Received response from host: affinity-nodeport-transition-plc96 Jul 1 00:58:04.318: INFO: Received response from host: affinity-nodeport-transition-plc96 Jul 1 00:58:04.318: INFO: Received response from host: affinity-nodeport-transition-plc96 Jul 1 00:58:04.318: INFO: Received response from host: affinity-nodeport-transition-fd9sn Jul 1 00:58:04.326: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-6144 execpod-affinityz8stv -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://172.17.0.13:31501/ ; done' Jul 1 00:58:04.646: INFO: stderr: "I0701 00:58:04.467022 2425 log.go:172] (0xc000be0b00) (0xc00073d720) Create stream\nI0701 00:58:04.467071 2425 log.go:172] (0xc000be0b00) (0xc00073d720) Stream added, broadcasting: 1\nI0701 00:58:04.469047 2425 log.go:172] (0xc000be0b00) Reply frame received for 1\nI0701 00:58:04.469096 2425 log.go:172] (0xc000be0b00) (0xc0006fe640) Create stream\nI0701 00:58:04.469108 2425 log.go:172] (0xc000be0b00) (0xc0006fe640) Stream added, broadcasting: 3\nI0701 00:58:04.470003 2425 log.go:172] (0xc000be0b00) Reply frame received for 3\nI0701 00:58:04.470042 2425 log.go:172] (0xc000be0b00) (0xc0006f0dc0) Create stream\nI0701 00:58:04.470055 2425 log.go:172] (0xc000be0b00) (0xc0006f0dc0) Stream added, broadcasting: 5\nI0701 00:58:04.471007 2425 log.go:172] (0xc000be0b00) Reply frame received for 5\nI0701 00:58:04.549782 2425 log.go:172] (0xc000be0b00) Data frame received for 3\nI0701 00:58:04.549808 2425 log.go:172] (0xc0006fe640) (3) Data frame handling\nI0701 00:58:04.549816 2425 log.go:172] (0xc0006fe640) (3) Data frame sent\nI0701 00:58:04.549838 2425 log.go:172] (0xc000be0b00) Data frame received for 5\nI0701 00:58:04.549843 2425 log.go:172] (0xc0006f0dc0) (5) Data frame handling\nI0701 00:58:04.549852 2425 log.go:172] (0xc0006f0dc0) (5) Data frame sent\n+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31501/\nI0701 00:58:04.553494 2425 log.go:172] (0xc000be0b00) Data frame received for 3\nI0701 00:58:04.553529 2425 log.go:172] (0xc0006fe640) (3) Data frame handling\nI0701 00:58:04.553558 2425 log.go:172] (0xc0006fe640) (3) Data frame sent\nI0701 00:58:04.553924 2425 log.go:172] (0xc000be0b00) Data frame received for 5\nI0701 00:58:04.553945 2425 log.go:172] (0xc0006f0dc0) (5) Data frame handling\nI0701 00:58:04.553952 2425 log.go:172] (0xc0006f0dc0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31501/\nI0701 00:58:04.553972 2425 log.go:172] 
(0xc000be0b00) Data frame received for 3\nI0701 00:58:04.554003 2425 log.go:172] (0xc0006fe640) (3) Data frame handling\nI0701 00:58:04.554034 2425 log.go:172] (0xc0006fe640) (3) Data frame sent\nI0701 00:58:04.557588 2425 log.go:172] (0xc000be0b00) Data frame received for 3\nI0701 00:58:04.557608 2425 log.go:172] (0xc0006fe640) (3) Data frame handling\nI0701 00:58:04.557625 2425 log.go:172] (0xc0006fe640) (3) Data frame sent\nI0701 00:58:04.557697 2425 log.go:172] (0xc000be0b00) Data frame received for 3\nI0701 00:58:04.557708 2425 log.go:172] (0xc0006fe640) (3) Data frame handling\nI0701 00:58:04.557718 2425 log.go:172] (0xc0006fe640) (3) Data frame sent\nI0701 00:58:04.557730 2425 log.go:172] (0xc000be0b00) Data frame received for 5\nI0701 00:58:04.557745 2425 log.go:172] (0xc0006f0dc0) (5) Data frame handling\nI0701 00:58:04.557760 2425 log.go:172] (0xc0006f0dc0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31501/\nI0701 00:58:04.560976 2425 log.go:172] (0xc000be0b00) Data frame received for 3\nI0701 00:58:04.560991 2425 log.go:172] (0xc0006fe640) (3) Data frame handling\nI0701 00:58:04.561008 2425 log.go:172] (0xc0006fe640) (3) Data frame sent\nI0701 00:58:04.561622 2425 log.go:172] (0xc000be0b00) Data frame received for 5\nI0701 00:58:04.561643 2425 log.go:172] (0xc0006f0dc0) (5) Data frame handling\nI0701 00:58:04.561662 2425 log.go:172] (0xc0006f0dc0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeoutI0701 00:58:04.561883 2425 log.go:172] (0xc000be0b00) Data frame received for 3\nI0701 00:58:04.561902 2425 log.go:172] (0xc0006fe640) (3) Data frame handling\nI0701 00:58:04.561919 2425 log.go:172] (0xc000be0b00) Data frame received for 5\nI0701 00:58:04.561946 2425 log.go:172] (0xc0006f0dc0) (5) Data frame handling\nI0701 00:58:04.561958 2425 log.go:172] (0xc0006f0dc0) (5) Data frame sent\n 2 http://172.17.0.13:31501/\nI0701 00:58:04.561972 2425 log.go:172] (0xc0006fe640) (3) Data frame sent\nI0701 00:58:04.567142 2425 log.go:172] (0xc000be0b00) Data frame received for 3\nI0701 00:58:04.567164 2425 log.go:172] (0xc0006fe640) (3) Data frame handling\nI0701 00:58:04.567185 2425 log.go:172] (0xc0006fe640) (3) Data frame sent\nI0701 00:58:04.567606 2425 log.go:172] (0xc000be0b00) Data frame received for 5\nI0701 00:58:04.567628 2425 log.go:172] (0xc0006f0dc0) (5) Data frame handling\nI0701 00:58:04.567640 2425 log.go:172] (0xc0006f0dc0) (5) Data frame sent\n+ echo\n+ curl -q -sI0701 00:58:04.567694 2425 log.go:172] (0xc000be0b00) Data frame received for 5\nI0701 00:58:04.567716 2425 log.go:172] (0xc0006f0dc0) (5) Data frame handling\nI0701 00:58:04.567732 2425 log.go:172] (0xc0006f0dc0) (5) Data frame sent\n --connect-timeout 2 http://172.17.0.13:31501/\nI0701 00:58:04.567860 2425 log.go:172] (0xc000be0b00) Data frame received for 3\nI0701 00:58:04.567877 2425 log.go:172] (0xc0006fe640) (3) Data frame handling\nI0701 00:58:04.567895 2425 log.go:172] (0xc0006fe640) (3) Data frame sent\nI0701 00:58:04.572133 2425 log.go:172] (0xc000be0b00) Data frame received for 3\nI0701 00:58:04.572148 2425 log.go:172] (0xc0006fe640) (3) Data frame handling\nI0701 00:58:04.572160 2425 log.go:172] (0xc0006fe640) (3) Data frame sent\nI0701 00:58:04.572626 2425 log.go:172] (0xc000be0b00) Data frame received for 3\nI0701 00:58:04.572651 2425 log.go:172] (0xc0006fe640) (3) Data frame handling\nI0701 00:58:04.572663 2425 log.go:172] (0xc0006fe640) (3) Data frame sent\nI0701 00:58:04.572680 2425 log.go:172] (0xc000be0b00) Data frame received for 5\nI0701 
00:58:04.572689 2425 log.go:172] (0xc0006f0dc0) (5) Data frame handling\nI0701 00:58:04.572699 2425 log.go:172] (0xc0006f0dc0) (5) Data frame sent\nI0701 00:58:04.572710 2425 log.go:172] (0xc000be0b00) Data frame received for 5\nI0701 00:58:04.572719 2425 log.go:172] (0xc0006f0dc0) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31501/\nI0701 00:58:04.572739 2425 log.go:172] (0xc0006f0dc0) (5) Data frame sent\nI0701 00:58:04.579803 2425 log.go:172] (0xc000be0b00) Data frame received for 3\nI0701 00:58:04.579838 2425 log.go:172] (0xc0006fe640) (3) Data frame handling\nI0701 00:58:04.579860 2425 log.go:172] (0xc0006fe640) (3) Data frame sent\nI0701 00:58:04.580370 2425 log.go:172] (0xc000be0b00) Data frame received for 5\nI0701 00:58:04.580390 2425 log.go:172] (0xc0006f0dc0) (5) Data frame handling\nI0701 00:58:04.580401 2425 log.go:172] (0xc0006f0dc0) (5) Data frame sent\nI0701 00:58:04.580410 2425 log.go:172] (0xc000be0b00) Data frame received for 5\n+ echo\n+ curl -q -s --connect-timeout 2I0701 00:58:04.580426 2425 log.go:172] (0xc0006f0dc0) (5) Data frame handling\nI0701 00:58:04.580458 2425 log.go:172] (0xc0006f0dc0) (5) Data frame sent\n http://172.17.0.13:31501/\nI0701 00:58:04.580475 2425 log.go:172] (0xc000be0b00) Data frame received for 3\nI0701 00:58:04.580501 2425 log.go:172] (0xc0006fe640) (3) Data frame handling\nI0701 00:58:04.580520 2425 log.go:172] (0xc0006fe640) (3) Data frame sent\nI0701 00:58:04.585893 2425 log.go:172] (0xc000be0b00) Data frame received for 3\nI0701 00:58:04.585922 2425 log.go:172] (0xc0006fe640) (3) Data frame handling\nI0701 00:58:04.585946 2425 log.go:172] (0xc0006fe640) (3) Data frame sent\nI0701 00:58:04.586316 2425 log.go:172] (0xc000be0b00) Data frame received for 3\nI0701 00:58:04.586332 2425 log.go:172] (0xc0006fe640) (3) Data frame handling\nI0701 00:58:04.586345 2425 log.go:172] (0xc0006fe640) (3) Data frame sent\nI0701 00:58:04.586360 2425 log.go:172] (0xc000be0b00) Data frame received for 5\nI0701 00:58:04.586374 2425 log.go:172] (0xc0006f0dc0) (5) Data frame handling\nI0701 00:58:04.586388 2425 log.go:172] (0xc0006f0dc0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31501/\nI0701 00:58:04.590342 2425 log.go:172] (0xc000be0b00) Data frame received for 3\nI0701 00:58:04.590372 2425 log.go:172] (0xc0006fe640) (3) Data frame handling\nI0701 00:58:04.590406 2425 log.go:172] (0xc0006fe640) (3) Data frame sent\nI0701 00:58:04.590727 2425 log.go:172] (0xc000be0b00) Data frame received for 5\nI0701 00:58:04.590740 2425 log.go:172] (0xc0006f0dc0) (5) Data frame handling\nI0701 00:58:04.590753 2425 log.go:172] (0xc0006f0dc0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2I0701 00:58:04.590763 2425 log.go:172] (0xc000be0b00) Data frame received for 5\nI0701 00:58:04.590792 2425 log.go:172] (0xc000be0b00) Data frame received for 3\nI0701 00:58:04.590802 2425 log.go:172] (0xc0006fe640) (3) Data frame handling\nI0701 00:58:04.590816 2425 log.go:172] (0xc0006f0dc0) (5) Data frame handling\nI0701 00:58:04.590841 2425 log.go:172] (0xc0006f0dc0) (5) Data frame sent\n http://172.17.0.13:31501/\nI0701 00:58:04.590865 2425 log.go:172] (0xc0006fe640) (3) Data frame sent\nI0701 00:58:04.595661 2425 log.go:172] (0xc000be0b00) Data frame received for 3\nI0701 00:58:04.595689 2425 log.go:172] (0xc0006fe640) (3) Data frame handling\nI0701 00:58:04.595709 2425 log.go:172] (0xc0006fe640) (3) Data frame sent\nI0701 00:58:04.596141 2425 log.go:172] (0xc000be0b00) Data frame received for 
5\nI0701 00:58:04.596155 2425 log.go:172] (0xc0006f0dc0) (5) Data frame handling\nI0701 00:58:04.596162 2425 log.go:172] (0xc0006f0dc0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31501/\nI0701 00:58:04.596172 2425 log.go:172] (0xc000be0b00) Data frame received for 3\nI0701 00:58:04.596178 2425 log.go:172] (0xc0006fe640) (3) Data frame handling\nI0701 00:58:04.596188 2425 log.go:172] (0xc0006fe640) (3) Data frame sent\nI0701 00:58:04.601050 2425 log.go:172] (0xc000be0b00) Data frame received for 3\nI0701 00:58:04.601068 2425 log.go:172] (0xc0006fe640) (3) Data frame handling\nI0701 00:58:04.601088 2425 log.go:172] (0xc0006fe640) (3) Data frame sent\nI0701 00:58:04.601846 2425 log.go:172] (0xc000be0b00) Data frame received for 3\nI0701 00:58:04.601876 2425 log.go:172] (0xc0006fe640) (3) Data frame handling\nI0701 00:58:04.601888 2425 log.go:172] (0xc0006fe640) (3) Data frame sent\nI0701 00:58:04.601905 2425 log.go:172] (0xc000be0b00) Data frame received for 5\nI0701 00:58:04.601917 2425 log.go:172] (0xc0006f0dc0) (5) Data frame handling\nI0701 00:58:04.601929 2425 log.go:172] (0xc0006f0dc0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeoutI0701 00:58:04.601941 2425 log.go:172] (0xc000be0b00) Data frame received for 5\nI0701 00:58:04.601963 2425 log.go:172] (0xc0006f0dc0) (5) Data frame handling\nI0701 00:58:04.601983 2425 log.go:172] (0xc0006f0dc0) (5) Data frame sent\n 2 http://172.17.0.13:31501/\nI0701 00:58:04.606004 2425 log.go:172] (0xc000be0b00) Data frame received for 3\nI0701 00:58:04.606029 2425 log.go:172] (0xc0006fe640) (3) Data frame handling\nI0701 00:58:04.606052 2425 log.go:172] (0xc0006fe640) (3) Data frame sent\nI0701 00:58:04.607178 2425 log.go:172] (0xc000be0b00) Data frame received for 5\nI0701 00:58:04.607222 2425 log.go:172] (0xc0006f0dc0) (5) Data frame handling\nI0701 00:58:04.607254 2425 log.go:172] (0xc0006f0dc0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2I0701 00:58:04.607289 2425 log.go:172] (0xc000be0b00) Data frame received for 3\nI0701 00:58:04.607309 2425 log.go:172] (0xc000be0b00) Data frame received for 5\nI0701 00:58:04.607331 2425 log.go:172] (0xc0006f0dc0) (5) Data frame handling\nI0701 00:58:04.607355 2425 log.go:172] (0xc0006f0dc0) (5) Data frame sent\n http://172.17.0.13:31501/\nI0701 00:58:04.607387 2425 log.go:172] (0xc0006fe640) (3) Data frame handling\nI0701 00:58:04.607420 2425 log.go:172] (0xc0006fe640) (3) Data frame sent\nI0701 00:58:04.611833 2425 log.go:172] (0xc000be0b00) Data frame received for 3\nI0701 00:58:04.611848 2425 log.go:172] (0xc0006fe640) (3) Data frame handling\nI0701 00:58:04.611855 2425 log.go:172] (0xc0006fe640) (3) Data frame sent\nI0701 00:58:04.612288 2425 log.go:172] (0xc000be0b00) Data frame received for 5\nI0701 00:58:04.612299 2425 log.go:172] (0xc0006f0dc0) (5) Data frame handling\nI0701 00:58:04.612304 2425 log.go:172] (0xc0006f0dc0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31501/\nI0701 00:58:04.612490 2425 log.go:172] (0xc000be0b00) Data frame received for 3\nI0701 00:58:04.612500 2425 log.go:172] (0xc0006fe640) (3) Data frame handling\nI0701 00:58:04.612505 2425 log.go:172] (0xc0006fe640) (3) Data frame sent\nI0701 00:58:04.617379 2425 log.go:172] (0xc000be0b00) Data frame received for 3\nI0701 00:58:04.617401 2425 log.go:172] (0xc0006fe640) (3) Data frame handling\nI0701 00:58:04.617410 2425 log.go:172] (0xc0006fe640) (3) Data frame sent\nI0701 00:58:04.617899 2425 log.go:172] (0xc000be0b00) Data frame 
received for 3\nI0701 00:58:04.617928 2425 log.go:172] (0xc0006fe640) (3) Data frame handling\nI0701 00:58:04.617945 2425 log.go:172] (0xc0006fe640) (3) Data frame sent\nI0701 00:58:04.617964 2425 log.go:172] (0xc000be0b00) Data frame received for 5\nI0701 00:58:04.617974 2425 log.go:172] (0xc0006f0dc0) (5) Data frame handling\nI0701 00:58:04.617985 2425 log.go:172] (0xc0006f0dc0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31501/\nI0701 00:58:04.625502 2425 log.go:172] (0xc000be0b00) Data frame received for 3\nI0701 00:58:04.625542 2425 log.go:172] (0xc0006fe640) (3) Data frame handling\nI0701 00:58:04.625574 2425 log.go:172] (0xc0006fe640) (3) Data frame sent\nI0701 00:58:04.625891 2425 log.go:172] (0xc000be0b00) Data frame received for 5\nI0701 00:58:04.625915 2425 log.go:172] (0xc0006f0dc0) (5) Data frame handling\nI0701 00:58:04.625944 2425 log.go:172] (0xc0006f0dc0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2I0701 00:58:04.625962 2425 log.go:172] (0xc000be0b00) Data frame received for 5\nI0701 00:58:04.626020 2425 log.go:172] (0xc0006f0dc0) (5) Data frame handling\nI0701 00:58:04.626052 2425 log.go:172] (0xc0006f0dc0) (5) Data frame sent\n http://172.17.0.13:31501/\nI0701 00:58:04.626079 2425 log.go:172] (0xc000be0b00) Data frame received for 3\nI0701 00:58:04.626093 2425 log.go:172] (0xc0006fe640) (3) Data frame handling\nI0701 00:58:04.626107 2425 log.go:172] (0xc0006fe640) (3) Data frame sent\nI0701 00:58:04.632084 2425 log.go:172] (0xc000be0b00) Data frame received for 3\nI0701 00:58:04.632107 2425 log.go:172] (0xc0006fe640) (3) Data frame handling\nI0701 00:58:04.632125 2425 log.go:172] (0xc0006fe640) (3) Data frame sent\nI0701 00:58:04.632862 2425 log.go:172] (0xc000be0b00) Data frame received for 5\nI0701 00:58:04.632892 2425 log.go:172] (0xc000be0b00) Data frame received for 3\nI0701 00:58:04.632923 2425 log.go:172] (0xc0006fe640) (3) Data frame handling\nI0701 00:58:04.632945 2425 log.go:172] (0xc0006fe640) (3) Data frame sent\nI0701 00:58:04.632974 2425 log.go:172] (0xc0006f0dc0) (5) Data frame handling\nI0701 00:58:04.632990 2425 log.go:172] (0xc0006f0dc0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31501/\nI0701 00:58:04.637807 2425 log.go:172] (0xc000be0b00) Data frame received for 3\nI0701 00:58:04.637833 2425 log.go:172] (0xc0006fe640) (3) Data frame handling\nI0701 00:58:04.637858 2425 log.go:172] (0xc0006fe640) (3) Data frame sent\nI0701 00:58:04.638417 2425 log.go:172] (0xc000be0b00) Data frame received for 5\nI0701 00:58:04.638442 2425 log.go:172] (0xc0006f0dc0) (5) Data frame handling\nI0701 00:58:04.638466 2425 log.go:172] (0xc000be0b00) Data frame received for 3\nI0701 00:58:04.638482 2425 log.go:172] (0xc0006fe640) (3) Data frame handling\nI0701 00:58:04.640532 2425 log.go:172] (0xc000be0b00) Data frame received for 1\nI0701 00:58:04.640558 2425 log.go:172] (0xc00073d720) (1) Data frame handling\nI0701 00:58:04.640582 2425 log.go:172] (0xc00073d720) (1) Data frame sent\nI0701 00:58:04.640601 2425 log.go:172] (0xc000be0b00) (0xc00073d720) Stream removed, broadcasting: 1\nI0701 00:58:04.640629 2425 log.go:172] (0xc000be0b00) Go away received\nI0701 00:58:04.640991 2425 log.go:172] (0xc000be0b00) (0xc00073d720) Stream removed, broadcasting: 1\nI0701 00:58:04.641014 2425 log.go:172] (0xc000be0b00) (0xc0006fe640) Stream removed, broadcasting: 3\nI0701 00:58:04.641026 2425 log.go:172] (0xc000be0b00) (0xc0006f0dc0) Stream removed, broadcasting: 5\n" Jul 1 00:58:04.647: INFO: 
stdout: "\naffinity-nodeport-transition-plc96\naffinity-nodeport-transition-plc96\naffinity-nodeport-transition-plc96\naffinity-nodeport-transition-plc96\naffinity-nodeport-transition-plc96\naffinity-nodeport-transition-plc96\naffinity-nodeport-transition-plc96\naffinity-nodeport-transition-plc96\naffinity-nodeport-transition-plc96\naffinity-nodeport-transition-plc96\naffinity-nodeport-transition-plc96\naffinity-nodeport-transition-plc96\naffinity-nodeport-transition-plc96\naffinity-nodeport-transition-plc96\naffinity-nodeport-transition-plc96\naffinity-nodeport-transition-plc96" Jul 1 00:58:04.647: INFO: Received response from host: Jul 1 00:58:04.647: INFO: Received response from host: affinity-nodeport-transition-plc96 Jul 1 00:58:04.647: INFO: Received response from host: affinity-nodeport-transition-plc96 Jul 1 00:58:04.647: INFO: Received response from host: affinity-nodeport-transition-plc96 Jul 1 00:58:04.647: INFO: Received response from host: affinity-nodeport-transition-plc96 Jul 1 00:58:04.647: INFO: Received response from host: affinity-nodeport-transition-plc96 Jul 1 00:58:04.647: INFO: Received response from host: affinity-nodeport-transition-plc96 Jul 1 00:58:04.647: INFO: Received response from host: affinity-nodeport-transition-plc96 Jul 1 00:58:04.647: INFO: Received response from host: affinity-nodeport-transition-plc96 Jul 1 00:58:04.647: INFO: Received response from host: affinity-nodeport-transition-plc96 Jul 1 00:58:04.647: INFO: Received response from host: affinity-nodeport-transition-plc96 Jul 1 00:58:04.647: INFO: Received response from host: affinity-nodeport-transition-plc96 Jul 1 00:58:04.647: INFO: Received response from host: affinity-nodeport-transition-plc96 Jul 1 00:58:04.647: INFO: Received response from host: affinity-nodeport-transition-plc96 Jul 1 00:58:04.647: INFO: Received response from host: affinity-nodeport-transition-plc96 Jul 1 00:58:04.647: INFO: Received response from host: affinity-nodeport-transition-plc96 Jul 1 00:58:04.647: INFO: Received response from host: affinity-nodeport-transition-plc96 Jul 1 00:58:04.647: INFO: Cleaning up the exec pod STEP: deleting ReplicationController affinity-nodeport-transition in namespace services-6144, will wait for the garbage collector to delete the pods Jul 1 00:58:04.777: INFO: Deleting ReplicationController affinity-nodeport-transition took: 6.816436ms Jul 1 00:58:05.278: INFO: Terminating ReplicationController affinity-nodeport-transition pods took: 500.226625ms [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 1 00:58:15.007: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-6144" for this suite. 
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:813 • [SLOW TEST:23.654 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","total":294,"completed":246,"skipped":4080,"failed":1,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]"]} [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 1 00:58:15.030: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward api env vars Jul 1 00:58:15.166: INFO: Waiting up to 5m0s for pod "downward-api-0c12141e-8226-4b04-9f8d-aa07ff670d5c" in namespace "downward-api-8371" to be "Succeeded or Failed" Jul 1 00:58:15.182: INFO: Pod "downward-api-0c12141e-8226-4b04-9f8d-aa07ff670d5c": Phase="Pending", Reason="", readiness=false. Elapsed: 15.53762ms Jul 1 00:58:17.236: INFO: Pod "downward-api-0c12141e-8226-4b04-9f8d-aa07ff670d5c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.069639804s Jul 1 00:58:19.240: INFO: Pod "downward-api-0c12141e-8226-4b04-9f8d-aa07ff670d5c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.073771123s STEP: Saw pod success Jul 1 00:58:19.240: INFO: Pod "downward-api-0c12141e-8226-4b04-9f8d-aa07ff670d5c" satisfied condition "Succeeded or Failed" Jul 1 00:58:19.243: INFO: Trying to get logs from node latest-worker2 pod downward-api-0c12141e-8226-4b04-9f8d-aa07ff670d5c container dapi-container: STEP: delete the pod Jul 1 00:58:19.286: INFO: Waiting for pod downward-api-0c12141e-8226-4b04-9f8d-aa07ff670d5c to disappear Jul 1 00:58:19.295: INFO: Pod downward-api-0c12141e-8226-4b04-9f8d-aa07ff670d5c no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 1 00:58:19.295: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-8371" for this suite. 
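The downward-api pod above passes because the kubelet injects the pod's own name, namespace and IP into the container's environment via fieldRef. A minimal sketch of such a pod, assuming a stock busybox image rather than the exact e2e test image (the fieldPath values are the standard downward-API ones):

  kubectl apply -f - <<EOF
  apiVersion: v1
  kind: Pod
  metadata:
    name: downward-api-demo    # hypothetical name
  spec:
    restartPolicy: Never
    containers:
    - name: dapi-container
      image: busybox
      command: ["sh", "-c", "env | grep POD_"]
      env:
      - name: POD_NAME
        valueFrom: {fieldRef: {fieldPath: metadata.name}}
      - name: POD_NAMESPACE
        valueFrom: {fieldRef: {fieldPath: metadata.namespace}}
      - name: POD_IP
        valueFrom: {fieldRef: {fieldPath: status.podIP}}
  EOF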
•{"msg":"PASSED [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]","total":294,"completed":247,"skipped":4080,"failed":1,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]"]} SSSSSSSSSSS ------------------------------ [k8s.io] Lease lease API should be available [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Lease /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 1 00:58:19.302: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename lease-test STEP: Waiting for a default service account to be provisioned in namespace [It] lease API should be available [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [AfterEach] [k8s.io] Lease /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 1 00:58:19.499: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "lease-test-4125" for this suite. •{"msg":"PASSED [k8s.io] Lease lease API should be available [Conformance]","total":294,"completed":248,"skipped":4091,"failed":1,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]"]} S ------------------------------ [k8s.io] Probing container should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 1 00:58:19.508: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54 [It] should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating pod busybox-e6e5ba2e-8e09-4ca1-acbd-785517fbad7c in namespace container-probe-1471 Jul 1 00:58:23.576: INFO: Started pod busybox-e6e5ba2e-8e09-4ca1-acbd-785517fbad7c in namespace container-probe-1471 STEP: checking the pod's current state and verifying that restartCount is present Jul 1 00:58:23.579: INFO: Initial restart count of pod busybox-e6e5ba2e-8e09-4ca1-acbd-785517fbad7c is 0 Jul 1 00:59:11.690: INFO: Restart count of pod container-probe-1471/busybox-e6e5ba2e-8e09-4ca1-acbd-785517fbad7c is now 1 (48.110099975s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 1 00:59:11.759: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-1471" for this suite. 
• [SLOW TEST:52.267 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":294,"completed":249,"skipped":4092,"failed":1,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]"]} SS ------------------------------ [sig-cli] Kubectl client Proxy server should support proxy with --port 0 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 1 00:59:11.775: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:251 [It] should support proxy with --port 0 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: starting the proxy server Jul 1 00:59:11.832: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config proxy -p 0 --disable-filter' STEP: curling proxy /api/ output [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 1 00:59:11.915: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-4169" for this suite. 
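Passing --port 0 (spelled -p 0 above) asks the proxy to bind an ephemeral port chosen by the kernel, which is why the test has to read the port back out of the proxy's startup output before curling /api/. A manual sketch, assuming the same kubeconfig; the placeholder port must be taken from the first line the proxy prints (e.g. "Starting to serve on 127.0.0.1:XXXXX"):

  kubectl proxy --port=0 --disable-filter &
  curl -s http://127.0.0.1:XXXXX/api/    # substitute the advertised port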
•{"msg":"PASSED [sig-cli] Kubectl client Proxy server should support proxy with --port 0 [Conformance]","total":294,"completed":250,"skipped":4094,"failed":1,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 1 00:59:11.926: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:809 [It] should be able to change the type from ExternalName to ClusterIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a service externalname-service with the type=ExternalName in namespace services-3420 STEP: changing the ExternalName service to type=ClusterIP STEP: creating replication controller externalname-service in namespace services-3420 I0701 00:59:12.184638 8 runners.go:190] Created replication controller with name: externalname-service, namespace: services-3420, replica count: 2 I0701 00:59:15.235071 8 runners.go:190] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0701 00:59:18.235299 8 runners.go:190] externalname-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Jul 1 00:59:18.235: INFO: Creating new exec pod Jul 1 00:59:23.253: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-3420 execpodklt4z -- /bin/sh -x -c nc -zv -t -w 2 externalname-service 80' Jul 1 00:59:23.528: INFO: stderr: "I0701 00:59:23.401598 2463 log.go:172] (0xc000896160) (0xc000887860) Create stream\nI0701 00:59:23.401651 2463 log.go:172] (0xc000896160) (0xc000887860) Stream added, broadcasting: 1\nI0701 00:59:23.404747 2463 log.go:172] (0xc000896160) Reply frame received for 1\nI0701 00:59:23.404811 2463 log.go:172] (0xc000896160) (0xc0006b0a00) Create stream\nI0701 00:59:23.404832 2463 log.go:172] (0xc000896160) (0xc0006b0a00) Stream added, broadcasting: 3\nI0701 00:59:23.406028 2463 log.go:172] (0xc000896160) Reply frame received for 3\nI0701 00:59:23.406094 2463 log.go:172] (0xc000896160) (0xc000564460) Create stream\nI0701 00:59:23.406122 2463 log.go:172] (0xc000896160) (0xc000564460) Stream added, broadcasting: 5\nI0701 00:59:23.407191 2463 log.go:172] (0xc000896160) Reply frame received for 5\nI0701 00:59:23.512138 2463 log.go:172] (0xc000896160) Data frame received for 5\nI0701 00:59:23.512164 2463 log.go:172] (0xc000564460) (5) Data frame handling\nI0701 00:59:23.512181 2463 log.go:172] (0xc000564460) (5) Data frame sent\n+ nc -zv -t -w 2 externalname-service 80\nI0701 00:59:23.519357 2463 log.go:172] (0xc000896160) Data frame received for 5\nI0701 00:59:23.519398 2463 log.go:172] (0xc000564460) (5) Data frame handling\nI0701 00:59:23.519436 
2463 log.go:172] (0xc000564460) (5) Data frame sent\nConnection to externalname-service 80 port [tcp/http] succeeded!\nI0701 00:59:23.519702 2463 log.go:172] (0xc000896160) Data frame received for 5\nI0701 00:59:23.519726 2463 log.go:172] (0xc000564460) (5) Data frame handling\nI0701 00:59:23.519740 2463 log.go:172] (0xc000896160) Data frame received for 3\nI0701 00:59:23.519746 2463 log.go:172] (0xc0006b0a00) (3) Data frame handling\nI0701 00:59:23.521684 2463 log.go:172] (0xc000896160) Data frame received for 1\nI0701 00:59:23.521713 2463 log.go:172] (0xc000887860) (1) Data frame handling\nI0701 00:59:23.521727 2463 log.go:172] (0xc000887860) (1) Data frame sent\nI0701 00:59:23.521742 2463 log.go:172] (0xc000896160) (0xc000887860) Stream removed, broadcasting: 1\nI0701 00:59:23.521757 2463 log.go:172] (0xc000896160) Go away received\nI0701 00:59:23.522060 2463 log.go:172] (0xc000896160) (0xc000887860) Stream removed, broadcasting: 1\nI0701 00:59:23.522073 2463 log.go:172] (0xc000896160) (0xc0006b0a00) Stream removed, broadcasting: 3\nI0701 00:59:23.522078 2463 log.go:172] (0xc000896160) (0xc000564460) Stream removed, broadcasting: 5\n" Jul 1 00:59:23.528: INFO: stdout: "" Jul 1 00:59:23.529: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-3420 execpodklt4z -- /bin/sh -x -c nc -zv -t -w 2 10.105.193.136 80' Jul 1 00:59:23.752: INFO: stderr: "I0701 00:59:23.675509 2483 log.go:172] (0xc00003bad0) (0xc0006f4960) Create stream\nI0701 00:59:23.675551 2483 log.go:172] (0xc00003bad0) (0xc0006f4960) Stream added, broadcasting: 1\nI0701 00:59:23.678327 2483 log.go:172] (0xc00003bad0) Reply frame received for 1\nI0701 00:59:23.678372 2483 log.go:172] (0xc00003bad0) (0xc0006a2e60) Create stream\nI0701 00:59:23.678388 2483 log.go:172] (0xc00003bad0) (0xc0006a2e60) Stream added, broadcasting: 3\nI0701 00:59:23.679490 2483 log.go:172] (0xc00003bad0) Reply frame received for 3\nI0701 00:59:23.679546 2483 log.go:172] (0xc00003bad0) (0xc0006f4fa0) Create stream\nI0701 00:59:23.679559 2483 log.go:172] (0xc00003bad0) (0xc0006f4fa0) Stream added, broadcasting: 5\nI0701 00:59:23.680614 2483 log.go:172] (0xc00003bad0) Reply frame received for 5\nI0701 00:59:23.743010 2483 log.go:172] (0xc00003bad0) Data frame received for 3\nI0701 00:59:23.743058 2483 log.go:172] (0xc0006a2e60) (3) Data frame handling\nI0701 00:59:23.743091 2483 log.go:172] (0xc00003bad0) Data frame received for 5\nI0701 00:59:23.743109 2483 log.go:172] (0xc0006f4fa0) (5) Data frame handling\nI0701 00:59:23.743127 2483 log.go:172] (0xc0006f4fa0) (5) Data frame sent\n+ nc -zv -t -w 2 10.105.193.136 80\nConnection to 10.105.193.136 80 port [tcp/http] succeeded!\nI0701 00:59:23.743381 2483 log.go:172] (0xc00003bad0) Data frame received for 5\nI0701 00:59:23.743414 2483 log.go:172] (0xc0006f4fa0) (5) Data frame handling\nI0701 00:59:23.744808 2483 log.go:172] (0xc00003bad0) Data frame received for 1\nI0701 00:59:23.744829 2483 log.go:172] (0xc0006f4960) (1) Data frame handling\nI0701 00:59:23.744838 2483 log.go:172] (0xc0006f4960) (1) Data frame sent\nI0701 00:59:23.744854 2483 log.go:172] (0xc00003bad0) (0xc0006f4960) Stream removed, broadcasting: 1\nI0701 00:59:23.744895 2483 log.go:172] (0xc00003bad0) Go away received\nI0701 00:59:23.745473 2483 log.go:172] (0xc00003bad0) (0xc0006f4960) Stream removed, broadcasting: 1\nI0701 00:59:23.745498 2483 log.go:172] (0xc00003bad0) (0xc0006a2e60) Stream removed, broadcasting: 3\nI0701 00:59:23.745516 2483 log.go:172] 
(0xc00003bad0) (0xc0006f4fa0) Stream removed, broadcasting: 5\n" Jul 1 00:59:23.752: INFO: stdout: "" Jul 1 00:59:23.752: INFO: Cleaning up the ExternalName to ClusterIP test service [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 1 00:59:23.783: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-3420" for this suite. [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:813 • [SLOW TEST:11.881 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from ExternalName to ClusterIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","total":294,"completed":251,"skipped":4148,"failed":1,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]"]} SSSSSSSSS ------------------------------ [sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 1 00:59:23.807: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:251 [BeforeEach] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:303 [It] should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a replication controller Jul 1 00:59:23.912: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-5251' Jul 1 00:59:24.223: INFO: stderr: "" Jul 1 00:59:24.223: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. Jul 1 00:59:24.224: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-5251' Jul 1 00:59:24.378: INFO: stderr: "" Jul 1 00:59:24.378: INFO: stdout: "update-demo-nautilus-4qnk5 update-demo-nautilus-9shv9 " Jul 1 00:59:24.378: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-4qnk5 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5251' Jul 1 00:59:24.488: INFO: stderr: "" Jul 1 00:59:24.488: INFO: stdout: "" Jul 1 00:59:24.488: INFO: update-demo-nautilus-4qnk5 is created but not running Jul 1 00:59:29.488: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-5251' Jul 1 00:59:29.739: INFO: stderr: "" Jul 1 00:59:29.739: INFO: stdout: "update-demo-nautilus-4qnk5 update-demo-nautilus-9shv9 " Jul 1 00:59:29.739: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-4qnk5 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5251' Jul 1 00:59:29.880: INFO: stderr: "" Jul 1 00:59:29.880: INFO: stdout: "true" Jul 1 00:59:29.880: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-4qnk5 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-5251' Jul 1 00:59:30.108: INFO: stderr: "" Jul 1 00:59:30.108: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Jul 1 00:59:30.108: INFO: validating pod update-demo-nautilus-4qnk5 Jul 1 00:59:30.125: INFO: got data: { "image": "nautilus.jpg" } Jul 1 00:59:30.125: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Jul 1 00:59:30.125: INFO: update-demo-nautilus-4qnk5 is verified up and running Jul 1 00:59:30.125: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-9shv9 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5251' Jul 1 00:59:30.231: INFO: stderr: "" Jul 1 00:59:30.231: INFO: stdout: "true" Jul 1 00:59:30.231: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-9shv9 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-5251' Jul 1 00:59:30.340: INFO: stderr: "" Jul 1 00:59:30.340: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Jul 1 00:59:30.340: INFO: validating pod update-demo-nautilus-9shv9 Jul 1 00:59:30.355: INFO: got data: { "image": "nautilus.jpg" } Jul 1 00:59:30.355: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Jul 1 00:59:30.355: INFO: update-demo-nautilus-9shv9 is verified up and running STEP: using delete to clean up resources Jul 1 00:59:30.355: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-5251' Jul 1 00:59:30.465: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" Jul 1 00:59:30.465: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" Jul 1 00:59:30.465: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-5251' Jul 1 00:59:30.565: INFO: stderr: "No resources found in kubectl-5251 namespace.\n" Jul 1 00:59:30.565: INFO: stdout: "" Jul 1 00:59:30.565: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-5251 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Jul 1 00:59:30.713: INFO: stderr: "" Jul 1 00:59:30.713: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 1 00:59:30.713: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-5251" for this suite. • [SLOW TEST:6.976 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:301 should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance]","total":294,"completed":252,"skipped":4157,"failed":1,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]"]} SSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 1 00:59:30.782: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:38 [It] should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 1 00:59:35.010: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-5011" for this suite. 
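The hostAliases test above checks that entries from pod.spec.hostAliases are written into the container's /etc/hosts by the kubelet. A minimal sketch, assuming a busybox image and illustrative IP/hostname values:

  kubectl apply -f - <<EOF
  apiVersion: v1
  kind: Pod
  metadata:
    name: hostaliases-demo    # hypothetical name
  spec:
    restartPolicy: Never
    hostAliases:
    - ip: "127.0.0.1"
      hostnames: ["foo.local", "bar.local"]
    containers:
    - name: cat-hosts
      image: busybox
      command: ["cat", "/etc/hosts"]
  EOF
  kubectl logs hostaliases-demo    # the "127.0.0.1 foo.local bar.local" line should appear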
•{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]","total":294,"completed":253,"skipped":4163,"failed":1,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]"]} ------------------------------ [sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 1 00:59:35.019: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:809 [It] should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating service in namespace services-6702 Jul 1 00:59:39.215: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-6702 kube-proxy-mode-detector -- /bin/sh -x -c curl -q -s --connect-timeout 1 http://localhost:10249/proxyMode' Jul 1 00:59:39.450: INFO: stderr: "I0701 00:59:39.354601 2726 log.go:172] (0xc000c31760) (0xc000702320) Create stream\nI0701 00:59:39.354657 2726 log.go:172] (0xc000c31760) (0xc000702320) Stream added, broadcasting: 1\nI0701 00:59:39.356741 2726 log.go:172] (0xc000c31760) Reply frame received for 1\nI0701 00:59:39.356771 2726 log.go:172] (0xc000c31760) (0xc00070ed20) Create stream\nI0701 00:59:39.356780 2726 log.go:172] (0xc000c31760) (0xc00070ed20) Stream added, broadcasting: 3\nI0701 00:59:39.357937 2726 log.go:172] (0xc000c31760) Reply frame received for 3\nI0701 00:59:39.357962 2726 log.go:172] (0xc000c31760) (0xc000ae2140) Create stream\nI0701 00:59:39.357971 2726 log.go:172] (0xc000c31760) (0xc000ae2140) Stream added, broadcasting: 5\nI0701 00:59:39.359033 2726 log.go:172] (0xc000c31760) Reply frame received for 5\nI0701 00:59:39.438602 2726 log.go:172] (0xc000c31760) Data frame received for 5\nI0701 00:59:39.438625 2726 log.go:172] (0xc000ae2140) (5) Data frame handling\nI0701 00:59:39.438642 2726 log.go:172] (0xc000ae2140) (5) Data frame sent\n+ curl -q -s --connect-timeout 1 http://localhost:10249/proxyMode\nI0701 00:59:39.443325 2726 log.go:172] (0xc000c31760) Data frame received for 3\nI0701 00:59:39.443357 2726 log.go:172] (0xc00070ed20) (3) Data frame handling\nI0701 00:59:39.443384 2726 log.go:172] (0xc00070ed20) (3) Data frame sent\nI0701 00:59:39.443822 2726 log.go:172] (0xc000c31760) Data frame received for 5\nI0701 00:59:39.443858 2726 log.go:172] (0xc000ae2140) (5) Data frame handling\nI0701 00:59:39.443884 2726 log.go:172] (0xc000c31760) Data frame received for 3\nI0701 00:59:39.443901 2726 log.go:172] (0xc00070ed20) (3) Data frame handling\nI0701 00:59:39.445207 2726 log.go:172] (0xc000c31760) Data frame received for 1\nI0701 00:59:39.445236 2726 log.go:172] (0xc000702320) (1) Data frame handling\nI0701 00:59:39.445242 2726 log.go:172] (0xc000702320) (1) Data frame sent\nI0701 00:59:39.445310 2726 log.go:172] (0xc000c31760) 
(0xc000702320) Stream removed, broadcasting: 1\nI0701 00:59:39.445379 2726 log.go:172] (0xc000c31760) Go away received\nI0701 00:59:39.445528 2726 log.go:172] (0xc000c31760) (0xc000702320) Stream removed, broadcasting: 1\nI0701 00:59:39.445537 2726 log.go:172] (0xc000c31760) (0xc00070ed20) Stream removed, broadcasting: 3\nI0701 00:59:39.445542 2726 log.go:172] (0xc000c31760) (0xc000ae2140) Stream removed, broadcasting: 5\n" Jul 1 00:59:39.450: INFO: stdout: "iptables" Jul 1 00:59:39.450: INFO: proxyMode: iptables Jul 1 00:59:39.455: INFO: Waiting for pod kube-proxy-mode-detector to disappear Jul 1 00:59:39.467: INFO: Pod kube-proxy-mode-detector still exists Jul 1 00:59:41.467: INFO: Waiting for pod kube-proxy-mode-detector to disappear Jul 1 00:59:41.471: INFO: Pod kube-proxy-mode-detector still exists Jul 1 00:59:43.467: INFO: Waiting for pod kube-proxy-mode-detector to disappear Jul 1 00:59:43.471: INFO: Pod kube-proxy-mode-detector still exists Jul 1 00:59:45.467: INFO: Waiting for pod kube-proxy-mode-detector to disappear Jul 1 00:59:45.470: INFO: Pod kube-proxy-mode-detector no longer exists STEP: creating service affinity-nodeport-timeout in namespace services-6702 STEP: creating replication controller affinity-nodeport-timeout in namespace services-6702 I0701 00:59:45.572787 8 runners.go:190] Created replication controller with name: affinity-nodeport-timeout, namespace: services-6702, replica count: 3 I0701 00:59:48.623279 8 runners.go:190] affinity-nodeport-timeout Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0701 00:59:51.623556 8 runners.go:190] affinity-nodeport-timeout Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Jul 1 00:59:51.636: INFO: Creating new exec pod Jul 1 00:59:56.721: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-6702 execpod-affinityx45xb -- /bin/sh -x -c nc -zv -t -w 2 affinity-nodeport-timeout 80' Jul 1 00:59:56.954: INFO: stderr: "I0701 00:59:56.858987 2746 log.go:172] (0xc00003b760) (0xc000850d20) Create stream\nI0701 00:59:56.859034 2746 log.go:172] (0xc00003b760) (0xc000850d20) Stream added, broadcasting: 1\nI0701 00:59:56.862745 2746 log.go:172] (0xc00003b760) Reply frame received for 1\nI0701 00:59:56.862779 2746 log.go:172] (0xc00003b760) (0xc000845900) Create stream\nI0701 00:59:56.862786 2746 log.go:172] (0xc00003b760) (0xc000845900) Stream added, broadcasting: 3\nI0701 00:59:56.863447 2746 log.go:172] (0xc00003b760) Reply frame received for 3\nI0701 00:59:56.863471 2746 log.go:172] (0xc00003b760) (0xc00083c960) Create stream\nI0701 00:59:56.863478 2746 log.go:172] (0xc00003b760) (0xc00083c960) Stream added, broadcasting: 5\nI0701 00:59:56.864060 2746 log.go:172] (0xc00003b760) Reply frame received for 5\nI0701 00:59:56.943959 2746 log.go:172] (0xc00003b760) Data frame received for 5\nI0701 00:59:56.944004 2746 log.go:172] (0xc00083c960) (5) Data frame handling\nI0701 00:59:56.944047 2746 log.go:172] (0xc00083c960) (5) Data frame sent\n+ nc -zv -t -w 2 affinity-nodeport-timeout 80\nI0701 00:59:56.944275 2746 log.go:172] (0xc00003b760) Data frame received for 5\nI0701 00:59:56.944326 2746 log.go:172] (0xc00083c960) (5) Data frame handling\nI0701 00:59:56.944348 2746 log.go:172] (0xc00083c960) (5) Data frame sent\nConnection to affinity-nodeport-timeout 80 port [tcp/http] succeeded!\nI0701 00:59:56.944683 2746 log.go:172] 
(0xc00003b760) Data frame received for 3\nI0701 00:59:56.944712 2746 log.go:172] (0xc000845900) (3) Data frame handling\nI0701 00:59:56.944992 2746 log.go:172] (0xc00003b760) Data frame received for 5\nI0701 00:59:56.945026 2746 log.go:172] (0xc00083c960) (5) Data frame handling\nI0701 00:59:56.947385 2746 log.go:172] (0xc00003b760) Data frame received for 1\nI0701 00:59:56.947410 2746 log.go:172] (0xc000850d20) (1) Data frame handling\nI0701 00:59:56.947427 2746 log.go:172] (0xc000850d20) (1) Data frame sent\nI0701 00:59:56.947454 2746 log.go:172] (0xc00003b760) (0xc000850d20) Stream removed, broadcasting: 1\nI0701 00:59:56.947487 2746 log.go:172] (0xc00003b760) Go away received\nI0701 00:59:56.947998 2746 log.go:172] (0xc00003b760) (0xc000850d20) Stream removed, broadcasting: 1\nI0701 00:59:56.948024 2746 log.go:172] (0xc00003b760) (0xc000845900) Stream removed, broadcasting: 3\nI0701 00:59:56.948037 2746 log.go:172] (0xc00003b760) (0xc00083c960) Stream removed, broadcasting: 5\n" Jul 1 00:59:56.954: INFO: stdout: "" Jul 1 00:59:56.954: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-6702 execpod-affinityx45xb -- /bin/sh -x -c nc -zv -t -w 2 10.109.218.103 80' Jul 1 00:59:57.159: INFO: stderr: "I0701 00:59:57.087457 2769 log.go:172] (0xc0000e9a20) (0xc0004a7360) Create stream\nI0701 00:59:57.087515 2769 log.go:172] (0xc0000e9a20) (0xc0004a7360) Stream added, broadcasting: 1\nI0701 00:59:57.090533 2769 log.go:172] (0xc0000e9a20) Reply frame received for 1\nI0701 00:59:57.090578 2769 log.go:172] (0xc0000e9a20) (0xc00068e140) Create stream\nI0701 00:59:57.090588 2769 log.go:172] (0xc0000e9a20) (0xc00068e140) Stream added, broadcasting: 3\nI0701 00:59:57.091704 2769 log.go:172] (0xc0000e9a20) Reply frame received for 3\nI0701 00:59:57.091743 2769 log.go:172] (0xc0000e9a20) (0xc0006d7720) Create stream\nI0701 00:59:57.091756 2769 log.go:172] (0xc0000e9a20) (0xc0006d7720) Stream added, broadcasting: 5\nI0701 00:59:57.092852 2769 log.go:172] (0xc0000e9a20) Reply frame received for 5\nI0701 00:59:57.150936 2769 log.go:172] (0xc0000e9a20) Data frame received for 5\nI0701 00:59:57.150966 2769 log.go:172] (0xc0006d7720) (5) Data frame handling\n+ nc -zv -t -w 2 10.109.218.103 80\nConnection to 10.109.218.103 80 port [tcp/http] succeeded!\nI0701 00:59:57.150986 2769 log.go:172] (0xc0000e9a20) Data frame received for 3\nI0701 00:59:57.151032 2769 log.go:172] (0xc00068e140) (3) Data frame handling\nI0701 00:59:57.151059 2769 log.go:172] (0xc0006d7720) (5) Data frame sent\nI0701 00:59:57.151076 2769 log.go:172] (0xc0000e9a20) Data frame received for 5\nI0701 00:59:57.151090 2769 log.go:172] (0xc0006d7720) (5) Data frame handling\nI0701 00:59:57.152468 2769 log.go:172] (0xc0000e9a20) Data frame received for 1\nI0701 00:59:57.152497 2769 log.go:172] (0xc0004a7360) (1) Data frame handling\nI0701 00:59:57.152529 2769 log.go:172] (0xc0004a7360) (1) Data frame sent\nI0701 00:59:57.152572 2769 log.go:172] (0xc0000e9a20) (0xc0004a7360) Stream removed, broadcasting: 1\nI0701 00:59:57.152599 2769 log.go:172] (0xc0000e9a20) Go away received\nI0701 00:59:57.153035 2769 log.go:172] (0xc0000e9a20) (0xc0004a7360) Stream removed, broadcasting: 1\nI0701 00:59:57.153057 2769 log.go:172] (0xc0000e9a20) (0xc00068e140) Stream removed, broadcasting: 3\nI0701 00:59:57.153076 2769 log.go:172] (0xc0000e9a20) (0xc0006d7720) Stream removed, broadcasting: 5\n" Jul 1 00:59:57.159: INFO: stdout: "" Jul 1 00:59:57.160: INFO: Running 
'/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-6702 execpod-affinityx45xb -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.13 32606' Jul 1 00:59:57.462: INFO: stderr: "I0701 00:59:57.296559 2790 log.go:172] (0xc000585130) (0xc000bd23c0) Create stream\nI0701 00:59:57.296627 2790 log.go:172] (0xc000585130) (0xc000bd23c0) Stream added, broadcasting: 1\nI0701 00:59:57.302861 2790 log.go:172] (0xc000585130) Reply frame received for 1\nI0701 00:59:57.302924 2790 log.go:172] (0xc000585130) (0xc000830140) Create stream\nI0701 00:59:57.302937 2790 log.go:172] (0xc000585130) (0xc000830140) Stream added, broadcasting: 3\nI0701 00:59:57.304165 2790 log.go:172] (0xc000585130) Reply frame received for 3\nI0701 00:59:57.304224 2790 log.go:172] (0xc000585130) (0xc000717540) Create stream\nI0701 00:59:57.304239 2790 log.go:172] (0xc000585130) (0xc000717540) Stream added, broadcasting: 5\nI0701 00:59:57.305342 2790 log.go:172] (0xc000585130) Reply frame received for 5\nI0701 00:59:57.454116 2790 log.go:172] (0xc000585130) Data frame received for 3\nI0701 00:59:57.454144 2790 log.go:172] (0xc000830140) (3) Data frame handling\nI0701 00:59:57.454160 2790 log.go:172] (0xc000585130) Data frame received for 5\nI0701 00:59:57.454165 2790 log.go:172] (0xc000717540) (5) Data frame handling\nI0701 00:59:57.454190 2790 log.go:172] (0xc000717540) (5) Data frame sent\nI0701 00:59:57.454196 2790 log.go:172] (0xc000585130) Data frame received for 5\nI0701 00:59:57.454207 2790 log.go:172] (0xc000717540) (5) Data frame handling\n+ nc -zv -t -w 2 172.17.0.13 32606\nConnection to 172.17.0.13 32606 port [tcp/32606] succeeded!\nI0701 00:59:57.455545 2790 log.go:172] (0xc000585130) Data frame received for 1\nI0701 00:59:57.455584 2790 log.go:172] (0xc000bd23c0) (1) Data frame handling\nI0701 00:59:57.455609 2790 log.go:172] (0xc000bd23c0) (1) Data frame sent\nI0701 00:59:57.455629 2790 log.go:172] (0xc000585130) (0xc000bd23c0) Stream removed, broadcasting: 1\nI0701 00:59:57.455648 2790 log.go:172] (0xc000585130) Go away received\nI0701 00:59:57.456188 2790 log.go:172] (0xc000585130) (0xc000bd23c0) Stream removed, broadcasting: 1\nI0701 00:59:57.456219 2790 log.go:172] (0xc000585130) (0xc000830140) Stream removed, broadcasting: 3\nI0701 00:59:57.456240 2790 log.go:172] (0xc000585130) (0xc000717540) Stream removed, broadcasting: 5\n" Jul 1 00:59:57.462: INFO: stdout: "" Jul 1 00:59:57.462: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-6702 execpod-affinityx45xb -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.12 32606' Jul 1 00:59:57.646: INFO: stderr: "I0701 00:59:57.584545 2811 log.go:172] (0xc000b4f080) (0xc0009e6320) Create stream\nI0701 00:59:57.584587 2811 log.go:172] (0xc000b4f080) (0xc0009e6320) Stream added, broadcasting: 1\nI0701 00:59:57.587914 2811 log.go:172] (0xc000b4f080) Reply frame received for 1\nI0701 00:59:57.587944 2811 log.go:172] (0xc000b4f080) (0xc0002a9a40) Create stream\nI0701 00:59:57.587953 2811 log.go:172] (0xc000b4f080) (0xc0002a9a40) Stream added, broadcasting: 3\nI0701 00:59:57.588850 2811 log.go:172] (0xc000b4f080) Reply frame received for 3\nI0701 00:59:57.588880 2811 log.go:172] (0xc000b4f080) (0xc0006b6be0) Create stream\nI0701 00:59:57.588889 2811 log.go:172] (0xc000b4f080) (0xc0006b6be0) Stream added, broadcasting: 5\nI0701 00:59:57.589951 2811 log.go:172] (0xc000b4f080) Reply frame received for 5\nI0701 00:59:57.636988 2811 log.go:172] (0xc000b4f080) 
Data frame received for 3\nI0701 00:59:57.637045 2811 log.go:172] (0xc0002a9a40) (3) Data frame handling\nI0701 00:59:57.637086 2811 log.go:172] (0xc000b4f080) Data frame received for 5\nI0701 00:59:57.637270 2811 log.go:172] (0xc0006b6be0) (5) Data frame handling\nI0701 00:59:57.637305 2811 log.go:172] (0xc0006b6be0) (5) Data frame sent\nI0701 00:59:57.637331 2811 log.go:172] (0xc000b4f080) Data frame received for 5\nI0701 00:59:57.637348 2811 log.go:172] (0xc0006b6be0) (5) Data frame handling\n+ nc -zv -t -w 2 172.17.0.12 32606\nConnection to 172.17.0.12 32606 port [tcp/32606] succeeded!\nI0701 00:59:57.638570 2811 log.go:172] (0xc000b4f080) Data frame received for 1\nI0701 00:59:57.638586 2811 log.go:172] (0xc0009e6320) (1) Data frame handling\nI0701 00:59:57.638596 2811 log.go:172] (0xc0009e6320) (1) Data frame sent\nI0701 00:59:57.638609 2811 log.go:172] (0xc000b4f080) (0xc0009e6320) Stream removed, broadcasting: 1\nI0701 00:59:57.638624 2811 log.go:172] (0xc000b4f080) Go away received\nI0701 00:59:57.639099 2811 log.go:172] (0xc000b4f080) (0xc0009e6320) Stream removed, broadcasting: 1\nI0701 00:59:57.639116 2811 log.go:172] (0xc000b4f080) (0xc0002a9a40) Stream removed, broadcasting: 3\nI0701 00:59:57.639125 2811 log.go:172] (0xc000b4f080) (0xc0006b6be0) Stream removed, broadcasting: 5\n" Jul 1 00:59:57.646: INFO: stdout: "" Jul 1 00:59:57.646: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-6702 execpod-affinityx45xb -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://172.17.0.13:32606/ ; done' Jul 1 00:59:57.923: INFO: stderr: "I0701 00:59:57.783506 2831 log.go:172] (0xc000ac9290) (0xc000b38280) Create stream\nI0701 00:59:57.783548 2831 log.go:172] (0xc000ac9290) (0xc000b38280) Stream added, broadcasting: 1\nI0701 00:59:57.786748 2831 log.go:172] (0xc000ac9290) Reply frame received for 1\nI0701 00:59:57.786791 2831 log.go:172] (0xc000ac9290) (0xc0007045a0) Create stream\nI0701 00:59:57.786801 2831 log.go:172] (0xc000ac9290) (0xc0007045a0) Stream added, broadcasting: 3\nI0701 00:59:57.787526 2831 log.go:172] (0xc000ac9290) Reply frame received for 3\nI0701 00:59:57.787558 2831 log.go:172] (0xc000ac9290) (0xc0004ea960) Create stream\nI0701 00:59:57.787570 2831 log.go:172] (0xc000ac9290) (0xc0004ea960) Stream added, broadcasting: 5\nI0701 00:59:57.788333 2831 log.go:172] (0xc000ac9290) Reply frame received for 5\nI0701 00:59:57.830738 2831 log.go:172] (0xc000ac9290) Data frame received for 5\nI0701 00:59:57.830779 2831 log.go:172] (0xc0004ea960) (5) Data frame handling\nI0701 00:59:57.830793 2831 log.go:172] (0xc0004ea960) (5) Data frame sent\n+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:32606/\nI0701 00:59:57.830832 2831 log.go:172] (0xc000ac9290) Data frame received for 3\nI0701 00:59:57.830853 2831 log.go:172] (0xc0007045a0) (3) Data frame handling\nI0701 00:59:57.830870 2831 log.go:172] (0xc0007045a0) (3) Data frame sent\nI0701 00:59:57.839430 2831 log.go:172] (0xc000ac9290) Data frame received for 3\nI0701 00:59:57.839453 2831 log.go:172] (0xc0007045a0) (3) Data frame handling\nI0701 00:59:57.839467 2831 log.go:172] (0xc0007045a0) (3) Data frame sent\nI0701 00:59:57.839690 2831 log.go:172] (0xc000ac9290) Data frame received for 3\nI0701 00:59:57.839708 2831 log.go:172] (0xc0007045a0) (3) Data frame handling\nI0701 00:59:57.839721 2831 log.go:172] (0xc0007045a0) (3) Data frame sent\nI0701 00:59:57.839734 2831 log.go:172] (0xc000ac9290) 
Data frame received for 5\nI0701 00:59:57.839743 2831 log.go:172] (0xc0004ea960) (5) Data frame handling\nI0701 00:59:57.839762 2831 log.go:172] (0xc0004ea960) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:32606/\nI0701 00:59:57.842883 2831 log.go:172] (0xc000ac9290) Data frame received for 3\nI0701 00:59:57.842909 2831 log.go:172] (0xc0007045a0) (3) Data frame handling\nI0701 00:59:57.842926 2831 log.go:172] (0xc0007045a0) (3) Data frame sent\nI0701 00:59:57.844052 2831 log.go:172] (0xc000ac9290) Data frame received for 5\nI0701 00:59:57.844074 2831 log.go:172] (0xc0004ea960) (5) Data frame handling\nI0701 00:59:57.844083 2831 log.go:172] (0xc0004ea960) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:32606/\nI0701 00:59:57.844105 2831 log.go:172] (0xc000ac9290) Data frame received for 3\nI0701 00:59:57.844140 2831 log.go:172] (0xc0007045a0) (3) Data frame handling\nI0701 00:59:57.844164 2831 log.go:172] (0xc0007045a0) (3) Data frame sent\nI0701 00:59:57.851041 2831 log.go:172] (0xc000ac9290) Data frame received for 3\nI0701 00:59:57.851056 2831 log.go:172] (0xc0007045a0) (3) Data frame handling\nI0701 00:59:57.851066 2831 log.go:172] (0xc0007045a0) (3) Data frame sent\nI0701 00:59:57.851579 2831 log.go:172] (0xc000ac9290) Data frame received for 3\nI0701 00:59:57.851605 2831 log.go:172] (0xc0007045a0) (3) Data frame handling\nI0701 00:59:57.851622 2831 log.go:172] (0xc0007045a0) (3) Data frame sent\nI0701 00:59:57.851641 2831 log.go:172] (0xc000ac9290) Data frame received for 5\nI0701 00:59:57.851648 2831 log.go:172] (0xc0004ea960) (5) Data frame handling\nI0701 00:59:57.851659 2831 log.go:172] (0xc0004ea960) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:32606/\nI0701 00:59:57.855872 2831 log.go:172] (0xc000ac9290) Data frame received for 3\nI0701 00:59:57.855898 2831 log.go:172] (0xc0007045a0) (3) Data frame handling\nI0701 00:59:57.855913 2831 log.go:172] (0xc0007045a0) (3) Data frame sent\nI0701 00:59:57.856346 2831 log.go:172] (0xc000ac9290) Data frame received for 3\nI0701 00:59:57.856385 2831 log.go:172] (0xc0007045a0) (3) Data frame handling\nI0701 00:59:57.856405 2831 log.go:172] (0xc0007045a0) (3) Data frame sent\nI0701 00:59:57.856425 2831 log.go:172] (0xc000ac9290) Data frame received for 5\nI0701 00:59:57.856438 2831 log.go:172] (0xc0004ea960) (5) Data frame handling\nI0701 00:59:57.856458 2831 log.go:172] (0xc0004ea960) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:32606/\nI0701 00:59:57.860694 2831 log.go:172] (0xc000ac9290) Data frame received for 3\nI0701 00:59:57.860711 2831 log.go:172] (0xc0007045a0) (3) Data frame handling\nI0701 00:59:57.860724 2831 log.go:172] (0xc0007045a0) (3) Data frame sent\nI0701 00:59:57.861707 2831 log.go:172] (0xc000ac9290) Data frame received for 5\nI0701 00:59:57.861736 2831 log.go:172] (0xc0004ea960) (5) Data frame handling\nI0701 00:59:57.861768 2831 log.go:172] (0xc0004ea960) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:32606/\nI0701 00:59:57.861831 2831 log.go:172] (0xc000ac9290) Data frame received for 3\nI0701 00:59:57.861849 2831 log.go:172] (0xc0007045a0) (3) Data frame handling\nI0701 00:59:57.861861 2831 log.go:172] (0xc0007045a0) (3) Data frame sent\nI0701 00:59:57.865633 2831 log.go:172] (0xc000ac9290) Data frame received for 3\nI0701 00:59:57.865652 2831 log.go:172] (0xc0007045a0) (3) Data frame handling\nI0701 00:59:57.865666 2831 log.go:172] (0xc0007045a0) 
(3) Data frame sent\nI0701 00:59:57.865937 2831 log.go:172] (0xc000ac9290) Data frame received for 3\nI0701 00:59:57.865947 2831 log.go:172] (0xc0007045a0) (3) Data frame handling\nI0701 00:59:57.865953 2831 log.go:172] (0xc0007045a0) (3) Data frame sent\nI0701 00:59:57.865969 2831 log.go:172] (0xc000ac9290) Data frame received for 5\nI0701 00:59:57.865988 2831 log.go:172] (0xc0004ea960) (5) Data frame handling\nI0701 00:59:57.866000 2831 log.go:172] (0xc0004ea960) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:32606/\nI0701 00:59:57.870099 2831 log.go:172] (0xc000ac9290) Data frame received for 3\nI0701 00:59:57.870115 2831 log.go:172] (0xc0007045a0) (3) Data frame handling\nI0701 00:59:57.870130 2831 log.go:172] (0xc0007045a0) (3) Data frame sent\nI0701 00:59:57.870467 2831 log.go:172] (0xc000ac9290) Data frame received for 3\nI0701 00:59:57.870481 2831 log.go:172] (0xc000ac9290) Data frame received for 5\nI0701 00:59:57.870490 2831 log.go:172] (0xc0004ea960) (5) Data frame handling\nI0701 00:59:57.870496 2831 log.go:172] (0xc0004ea960) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:32606/\nI0701 00:59:57.870506 2831 log.go:172] (0xc0007045a0) (3) Data frame handling\nI0701 00:59:57.870520 2831 log.go:172] (0xc0007045a0) (3) Data frame sent\nI0701 00:59:57.874683 2831 log.go:172] (0xc000ac9290) Data frame received for 3\nI0701 00:59:57.874701 2831 log.go:172] (0xc0007045a0) (3) Data frame handling\nI0701 00:59:57.874715 2831 log.go:172] (0xc0007045a0) (3) Data frame sent\nI0701 00:59:57.875105 2831 log.go:172] (0xc000ac9290) Data frame received for 5\nI0701 00:59:57.875136 2831 log.go:172] (0xc0004ea960) (5) Data frame handling\nI0701 00:59:57.875147 2831 log.go:172] (0xc0004ea960) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:32606/\nI0701 00:59:57.875174 2831 log.go:172] (0xc000ac9290) Data frame received for 3\nI0701 00:59:57.875196 2831 log.go:172] (0xc0007045a0) (3) Data frame handling\nI0701 00:59:57.875219 2831 log.go:172] (0xc0007045a0) (3) Data frame sent\nI0701 00:59:57.880715 2831 log.go:172] (0xc000ac9290) Data frame received for 3\nI0701 00:59:57.880735 2831 log.go:172] (0xc0007045a0) (3) Data frame handling\nI0701 00:59:57.880742 2831 log.go:172] (0xc0007045a0) (3) Data frame sent\nI0701 00:59:57.881335 2831 log.go:172] (0xc000ac9290) Data frame received for 5\nI0701 00:59:57.881398 2831 log.go:172] (0xc0004ea960) (5) Data frame handling\nI0701 00:59:57.881421 2831 log.go:172] (0xc0004ea960) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:32606/\nI0701 00:59:57.881716 2831 log.go:172] (0xc000ac9290) Data frame received for 3\nI0701 00:59:57.881731 2831 log.go:172] (0xc0007045a0) (3) Data frame handling\nI0701 00:59:57.881743 2831 log.go:172] (0xc0007045a0) (3) Data frame sent\nI0701 00:59:57.885075 2831 log.go:172] (0xc000ac9290) Data frame received for 3\nI0701 00:59:57.885092 2831 log.go:172] (0xc0007045a0) (3) Data frame handling\nI0701 00:59:57.885108 2831 log.go:172] (0xc0007045a0) (3) Data frame sent\nI0701 00:59:57.885691 2831 log.go:172] (0xc000ac9290) Data frame received for 3\nI0701 00:59:57.885721 2831 log.go:172] (0xc0007045a0) (3) Data frame handling\nI0701 00:59:57.885734 2831 log.go:172] (0xc0007045a0) (3) Data frame sent\nI0701 00:59:57.885753 2831 log.go:172] (0xc000ac9290) Data frame received for 5\nI0701 00:59:57.885764 2831 log.go:172] (0xc0004ea960) (5) Data frame handling\nI0701 00:59:57.885774 2831 log.go:172] 
(0xc0004ea960) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:32606/\nI0701 00:59:57.890397 2831 log.go:172] (0xc000ac9290) Data frame received for 3\nI0701 00:59:57.890411 2831 log.go:172] (0xc0007045a0) (3) Data frame handling\nI0701 00:59:57.890421 2831 log.go:172] (0xc0007045a0) (3) Data frame sent\nI0701 00:59:57.890881 2831 log.go:172] (0xc000ac9290) Data frame received for 5\nI0701 00:59:57.890907 2831 log.go:172] (0xc0004ea960) (5) Data frame handling\nI0701 00:59:57.890918 2831 log.go:172] (0xc0004ea960) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:32606/\nI0701 00:59:57.890947 2831 log.go:172] (0xc000ac9290) Data frame received for 3\nI0701 00:59:57.890966 2831 log.go:172] (0xc0007045a0) (3) Data frame handling\nI0701 00:59:57.890974 2831 log.go:172] (0xc0007045a0) (3) Data frame sent\nI0701 00:59:57.895311 2831 log.go:172] (0xc000ac9290) Data frame received for 3\nI0701 00:59:57.895322 2831 log.go:172] (0xc0007045a0) (3) Data frame handling\nI0701 00:59:57.895333 2831 log.go:172] (0xc0007045a0) (3) Data frame sent\nI0701 00:59:57.895736 2831 log.go:172] (0xc000ac9290) Data frame received for 5\nI0701 00:59:57.895747 2831 log.go:172] (0xc0004ea960) (5) Data frame handling\nI0701 00:59:57.895752 2831 log.go:172] (0xc0004ea960) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:32606/\nI0701 00:59:57.895910 2831 log.go:172] (0xc000ac9290) Data frame received for 3\nI0701 00:59:57.895933 2831 log.go:172] (0xc0007045a0) (3) Data frame handling\nI0701 00:59:57.895951 2831 log.go:172] (0xc0007045a0) (3) Data frame sent\nI0701 00:59:57.900743 2831 log.go:172] (0xc000ac9290) Data frame received for 3\nI0701 00:59:57.900758 2831 log.go:172] (0xc0007045a0) (3) Data frame handling\nI0701 00:59:57.900765 2831 log.go:172] (0xc0007045a0) (3) Data frame sent\nI0701 00:59:57.901825 2831 log.go:172] (0xc000ac9290) Data frame received for 3\nI0701 00:59:57.901861 2831 log.go:172] (0xc0007045a0) (3) Data frame handling\nI0701 00:59:57.901878 2831 log.go:172] (0xc0007045a0) (3) Data frame sent\nI0701 00:59:57.901905 2831 log.go:172] (0xc000ac9290) Data frame received for 5\nI0701 00:59:57.901922 2831 log.go:172] (0xc0004ea960) (5) Data frame handling\nI0701 00:59:57.901946 2831 log.go:172] (0xc0004ea960) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:32606/\nI0701 00:59:57.904735 2831 log.go:172] (0xc000ac9290) Data frame received for 3\nI0701 00:59:57.904751 2831 log.go:172] (0xc0007045a0) (3) Data frame handling\nI0701 00:59:57.904762 2831 log.go:172] (0xc0007045a0) (3) Data frame sent\nI0701 00:59:57.905845 2831 log.go:172] (0xc000ac9290) Data frame received for 3\nI0701 00:59:57.905879 2831 log.go:172] (0xc0007045a0) (3) Data frame handling\nI0701 00:59:57.905904 2831 log.go:172] (0xc000ac9290) Data frame received for 5\nI0701 00:59:57.905941 2831 log.go:172] (0xc0004ea960) (5) Data frame handling\nI0701 00:59:57.905954 2831 log.go:172] (0xc0004ea960) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:32606/\nI0701 00:59:57.905973 2831 log.go:172] (0xc0007045a0) (3) Data frame sent\nI0701 00:59:57.908910 2831 log.go:172] (0xc000ac9290) Data frame received for 3\nI0701 00:59:57.908921 2831 log.go:172] (0xc0007045a0) (3) Data frame handling\nI0701 00:59:57.908928 2831 log.go:172] (0xc0007045a0) (3) Data frame sent\nI0701 00:59:57.909888 2831 log.go:172] (0xc000ac9290) Data frame received for 3\nI0701 00:59:57.909906 2831 log.go:172] 
(0xc0007045a0) (3) Data frame handling\nI0701 00:59:57.909917 2831 log.go:172] (0xc0007045a0) (3) Data frame sent\nI0701 00:59:57.909928 2831 log.go:172] (0xc000ac9290) Data frame received for 5\nI0701 00:59:57.909936 2831 log.go:172] (0xc0004ea960) (5) Data frame handling\nI0701 00:59:57.909951 2831 log.go:172] (0xc0004ea960) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:32606/\nI0701 00:59:57.913822 2831 log.go:172] (0xc000ac9290) Data frame received for 3\nI0701 00:59:57.913853 2831 log.go:172] (0xc0007045a0) (3) Data frame handling\nI0701 00:59:57.913895 2831 log.go:172] (0xc0007045a0) (3) Data frame sent\nI0701 00:59:57.914553 2831 log.go:172] (0xc000ac9290) Data frame received for 5\nI0701 00:59:57.914576 2831 log.go:172] (0xc0004ea960) (5) Data frame handling\nI0701 00:59:57.914631 2831 log.go:172] (0xc000ac9290) Data frame received for 3\nI0701 00:59:57.914665 2831 log.go:172] (0xc0007045a0) (3) Data frame handling\nI0701 00:59:57.916373 2831 log.go:172] (0xc000ac9290) Data frame received for 1\nI0701 00:59:57.916391 2831 log.go:172] (0xc000b38280) (1) Data frame handling\nI0701 00:59:57.916400 2831 log.go:172] (0xc000b38280) (1) Data frame sent\nI0701 00:59:57.916510 2831 log.go:172] (0xc000ac9290) (0xc000b38280) Stream removed, broadcasting: 1\nI0701 00:59:57.916727 2831 log.go:172] (0xc000ac9290) Go away received\nI0701 00:59:57.916927 2831 log.go:172] (0xc000ac9290) (0xc000b38280) Stream removed, broadcasting: 1\nI0701 00:59:57.916946 2831 log.go:172] (0xc000ac9290) (0xc0007045a0) Stream removed, broadcasting: 3\nI0701 00:59:57.916958 2831 log.go:172] (0xc000ac9290) (0xc0004ea960) Stream removed, broadcasting: 5\n" Jul 1 00:59:57.924: INFO: stdout: "\naffinity-nodeport-timeout-pg4mw\naffinity-nodeport-timeout-pg4mw\naffinity-nodeport-timeout-pg4mw\naffinity-nodeport-timeout-pg4mw\naffinity-nodeport-timeout-pg4mw\naffinity-nodeport-timeout-pg4mw\naffinity-nodeport-timeout-pg4mw\naffinity-nodeport-timeout-pg4mw\naffinity-nodeport-timeout-pg4mw\naffinity-nodeport-timeout-pg4mw\naffinity-nodeport-timeout-pg4mw\naffinity-nodeport-timeout-pg4mw\naffinity-nodeport-timeout-pg4mw\naffinity-nodeport-timeout-pg4mw\naffinity-nodeport-timeout-pg4mw\naffinity-nodeport-timeout-pg4mw" Jul 1 00:59:57.924: INFO: Received response from host: Jul 1 00:59:57.924: INFO: Received response from host: affinity-nodeport-timeout-pg4mw Jul 1 00:59:57.924: INFO: Received response from host: affinity-nodeport-timeout-pg4mw Jul 1 00:59:57.924: INFO: Received response from host: affinity-nodeport-timeout-pg4mw Jul 1 00:59:57.924: INFO: Received response from host: affinity-nodeport-timeout-pg4mw Jul 1 00:59:57.924: INFO: Received response from host: affinity-nodeport-timeout-pg4mw Jul 1 00:59:57.924: INFO: Received response from host: affinity-nodeport-timeout-pg4mw Jul 1 00:59:57.924: INFO: Received response from host: affinity-nodeport-timeout-pg4mw Jul 1 00:59:57.924: INFO: Received response from host: affinity-nodeport-timeout-pg4mw Jul 1 00:59:57.924: INFO: Received response from host: affinity-nodeport-timeout-pg4mw Jul 1 00:59:57.924: INFO: Received response from host: affinity-nodeport-timeout-pg4mw Jul 1 00:59:57.924: INFO: Received response from host: affinity-nodeport-timeout-pg4mw Jul 1 00:59:57.924: INFO: Received response from host: affinity-nodeport-timeout-pg4mw Jul 1 00:59:57.924: INFO: Received response from host: affinity-nodeport-timeout-pg4mw Jul 1 00:59:57.924: INFO: Received response from host: affinity-nodeport-timeout-pg4mw Jul 1 00:59:57.924: INFO: 
Received response from host: affinity-nodeport-timeout-pg4mw Jul 1 00:59:57.924: INFO: Received response from host: affinity-nodeport-timeout-pg4mw Jul 1 00:59:57.924: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-6702 execpod-affinityx45xb -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://172.17.0.13:32606/' Jul 1 00:59:58.135: INFO: stderr: "I0701 00:59:58.056557 2852 log.go:172] (0xc000a0d130) (0xc000c62320) Create stream\nI0701 00:59:58.056611 2852 log.go:172] (0xc000a0d130) (0xc000c62320) Stream added, broadcasting: 1\nI0701 00:59:58.062100 2852 log.go:172] (0xc000a0d130) Reply frame received for 1\nI0701 00:59:58.062134 2852 log.go:172] (0xc000a0d130) (0xc0006ce140) Create stream\nI0701 00:59:58.062144 2852 log.go:172] (0xc000a0d130) (0xc0006ce140) Stream added, broadcasting: 3\nI0701 00:59:58.063256 2852 log.go:172] (0xc000a0d130) Reply frame received for 3\nI0701 00:59:58.063315 2852 log.go:172] (0xc000a0d130) (0xc00065eb40) Create stream\nI0701 00:59:58.063340 2852 log.go:172] (0xc000a0d130) (0xc00065eb40) Stream added, broadcasting: 5\nI0701 00:59:58.064270 2852 log.go:172] (0xc000a0d130) Reply frame received for 5\nI0701 00:59:58.123834 2852 log.go:172] (0xc000a0d130) Data frame received for 3\nI0701 00:59:58.123891 2852 log.go:172] (0xc0006ce140) (3) Data frame handling\nI0701 00:59:58.123916 2852 log.go:172] (0xc0006ce140) (3) Data frame sent\nI0701 00:59:58.123967 2852 log.go:172] (0xc000a0d130) Data frame received for 5\nI0701 00:59:58.123991 2852 log.go:172] (0xc00065eb40) (5) Data frame handling\nI0701 00:59:58.124022 2852 log.go:172] (0xc00065eb40) (5) Data frame sent\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:32606/\nI0701 00:59:58.124745 2852 log.go:172] (0xc000a0d130) Data frame received for 5\nI0701 00:59:58.124844 2852 log.go:172] (0xc00065eb40) (5) Data frame handling\nI0701 00:59:58.124979 2852 log.go:172] (0xc000a0d130) Data frame received for 3\nI0701 00:59:58.124995 2852 log.go:172] (0xc0006ce140) (3) Data frame handling\nI0701 00:59:58.130068 2852 log.go:172] (0xc000a0d130) Data frame received for 1\nI0701 00:59:58.130090 2852 log.go:172] (0xc000c62320) (1) Data frame handling\nI0701 00:59:58.130112 2852 log.go:172] (0xc000c62320) (1) Data frame sent\nI0701 00:59:58.130126 2852 log.go:172] (0xc000a0d130) (0xc000c62320) Stream removed, broadcasting: 1\nI0701 00:59:58.130140 2852 log.go:172] (0xc000a0d130) Go away received\nI0701 00:59:58.130689 2852 log.go:172] (0xc000a0d130) (0xc000c62320) Stream removed, broadcasting: 1\nI0701 00:59:58.130709 2852 log.go:172] (0xc000a0d130) (0xc0006ce140) Stream removed, broadcasting: 3\nI0701 00:59:58.130717 2852 log.go:172] (0xc000a0d130) (0xc00065eb40) Stream removed, broadcasting: 5\n" Jul 1 00:59:58.135: INFO: stdout: "affinity-nodeport-timeout-pg4mw" Jul 1 01:00:13.136: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-6702 execpod-affinityx45xb -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://172.17.0.13:32606/' Jul 1 01:00:13.405: INFO: stderr: "I0701 01:00:13.262109 2874 log.go:172] (0xc000451a20) (0xc0004e6be0) Create stream\nI0701 01:00:13.262162 2874 log.go:172] (0xc000451a20) (0xc0004e6be0) Stream added, broadcasting: 1\nI0701 01:00:13.265611 2874 log.go:172] (0xc000451a20) Reply frame received for 1\nI0701 01:00:13.265644 2874 log.go:172] (0xc000451a20) (0xc000434140) Create stream\nI0701 01:00:13.265652 2874 log.go:172] 
(0xc000451a20) (0xc000434140) Stream added, broadcasting: 3\nI0701 01:00:13.266382 2874 log.go:172] (0xc000451a20) Reply frame received for 3\nI0701 01:00:13.266421 2874 log.go:172] (0xc000451a20) (0xc0004e68c0) Create stream\nI0701 01:00:13.266433 2874 log.go:172] (0xc000451a20) (0xc0004e68c0) Stream added, broadcasting: 5\nI0701 01:00:13.267093 2874 log.go:172] (0xc000451a20) Reply frame received for 5\nI0701 01:00:13.363275 2874 log.go:172] (0xc000451a20) Data frame received for 5\nI0701 01:00:13.363296 2874 log.go:172] (0xc0004e68c0) (5) Data frame handling\nI0701 01:00:13.363308 2874 log.go:172] (0xc0004e68c0) (5) Data frame sent\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:32606/\nI0701 01:00:13.394037 2874 log.go:172] (0xc000451a20) Data frame received for 3\nI0701 01:00:13.394068 2874 log.go:172] (0xc000434140) (3) Data frame handling\nI0701 01:00:13.394089 2874 log.go:172] (0xc000434140) (3) Data frame sent\nI0701 01:00:13.394998 2874 log.go:172] (0xc000451a20) Data frame received for 5\nI0701 01:00:13.395018 2874 log.go:172] (0xc0004e68c0) (5) Data frame handling\nI0701 01:00:13.395059 2874 log.go:172] (0xc000451a20) Data frame received for 3\nI0701 01:00:13.395093 2874 log.go:172] (0xc000434140) (3) Data frame handling\nI0701 01:00:13.397225 2874 log.go:172] (0xc000451a20) Data frame received for 1\nI0701 01:00:13.397247 2874 log.go:172] (0xc0004e6be0) (1) Data frame handling\nI0701 01:00:13.397255 2874 log.go:172] (0xc0004e6be0) (1) Data frame sent\nI0701 01:00:13.397264 2874 log.go:172] (0xc000451a20) (0xc0004e6be0) Stream removed, broadcasting: 1\nI0701 01:00:13.397419 2874 log.go:172] (0xc000451a20) Go away received\nI0701 01:00:13.397536 2874 log.go:172] (0xc000451a20) (0xc0004e6be0) Stream removed, broadcasting: 1\nI0701 01:00:13.397551 2874 log.go:172] (0xc000451a20) (0xc000434140) Stream removed, broadcasting: 3\nI0701 01:00:13.397561 2874 log.go:172] (0xc000451a20) (0xc0004e68c0) Stream removed, broadcasting: 5\n" Jul 1 01:00:13.405: INFO: stdout: "affinity-nodeport-timeout-dzt4n" Jul 1 01:00:13.405: INFO: Cleaning up the exec pod STEP: deleting ReplicationController affinity-nodeport-timeout in namespace services-6702, will wait for the garbage collector to delete the pods Jul 1 01:00:13.538: INFO: Deleting ReplicationController affinity-nodeport-timeout took: 27.541388ms Jul 1 01:00:13.939: INFO: Terminating ReplicationController affinity-nodeport-timeout pods took: 400.303376ms [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 1 01:00:25.417: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-6702" for this suite. 
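The session-affinity check above follows a fixed pattern: detect the kube-proxy mode from the node's localhost:10249/proxyMode endpoint, expose three backends behind a NodePort service with ClientIP affinity and a short timeout, fire 16 requests at nodeIP:nodePort and require a single backend name, then wait out the timeout and verify a fresh request can land on a different backend (pg4mw before the pause, dzt4n after). A reduced sketch of the probe loop, with the node IP and port taken from this run:

  # While affinity holds, all 16 responses should name the same backend pod.
  NODE_IP=172.17.0.13
  NODE_PORT=32606
  for i in $(seq 0 15); do
    echo
    curl -q -s --connect-timeout 2 "http://${NODE_IP}:${NODE_PORT}/"
  done
  # Let the affinity timeout expire; the run above waits ~15s (00:59:58 to 01:00:13).
  sleep 15
  curl -q -s --connect-timeout 2 "http://${NODE_IP}:${NODE_PORT}/"   # may now hit a different backend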
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:813 • [SLOW TEST:50.411 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","total":294,"completed":254,"skipped":4163,"failed":1,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]"]} SSSSS ------------------------------ [sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 1 01:00:25.431: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward api env vars Jul 1 01:00:25.488: INFO: Waiting up to 5m0s for pod "downward-api-9c86d7e6-8d12-49ee-8d5b-a7209770e49d" in namespace "downward-api-1314" to be "Succeeded or Failed" Jul 1 01:00:25.516: INFO: Pod "downward-api-9c86d7e6-8d12-49ee-8d5b-a7209770e49d": Phase="Pending", Reason="", readiness=false. Elapsed: 27.359393ms Jul 1 01:00:27.520: INFO: Pod "downward-api-9c86d7e6-8d12-49ee-8d5b-a7209770e49d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.031611294s Jul 1 01:00:29.524: INFO: Pod "downward-api-9c86d7e6-8d12-49ee-8d5b-a7209770e49d": Phase="Running", Reason="", readiness=true. Elapsed: 4.035293128s Jul 1 01:00:31.528: INFO: Pod "downward-api-9c86d7e6-8d12-49ee-8d5b-a7209770e49d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.039911653s STEP: Saw pod success Jul 1 01:00:31.528: INFO: Pod "downward-api-9c86d7e6-8d12-49ee-8d5b-a7209770e49d" satisfied condition "Succeeded or Failed" Jul 1 01:00:31.531: INFO: Trying to get logs from node latest-worker2 pod downward-api-9c86d7e6-8d12-49ee-8d5b-a7209770e49d container dapi-container: STEP: delete the pod Jul 1 01:00:31.565: INFO: Waiting for pod downward-api-9c86d7e6-8d12-49ee-8d5b-a7209770e49d to disappear Jul 1 01:00:31.614: INFO: Pod downward-api-9c86d7e6-8d12-49ee-8d5b-a7209770e49d no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 1 01:00:31.614: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-1314" for this suite. 
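The Downward API test above creates a container with no resource limits and asserts that limits.cpu/limits.memory environment variables still resolve, falling back to the node's allocatable capacity. A minimal sketch of the same wiring, assuming hypothetical pod and variable names; only the resourceFieldRef shape mirrors what the test exercises:

  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: downward-api-demo        # hypothetical name
  spec:
    restartPolicy: Never
    containers:
    - name: dapi-container
      image: busybox
      command: ["sh", "-c", "env | grep LIMIT"]
      env:
      - name: CPU_LIMIT            # with no limits set, defaults to node allocatable CPU
        valueFrom:
          resourceFieldRef:
            resource: limits.cpu
      - name: MEMORY_LIMIT         # likewise for memory
        valueFrom:
          resourceFieldRef:
            resource: limits.memory
  EOF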
• [SLOW TEST:6.192 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:34 should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]","total":294,"completed":255,"skipped":4168,"failed":1,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]"]} SSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 1 01:00:31.623: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:134 [It] should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a simple DaemonSet "daemon-set" STEP: Check that daemon pods launch on every node of the cluster. Jul 1 01:00:31.771: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 1 01:00:31.784: INFO: Number of nodes with available pods: 0 Jul 1 01:00:31.784: INFO: Node latest-worker is running more than one daemon pod Jul 1 01:00:32.789: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 1 01:00:32.794: INFO: Number of nodes with available pods: 0 Jul 1 01:00:32.794: INFO: Node latest-worker is running more than one daemon pod Jul 1 01:00:33.836: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 1 01:00:33.840: INFO: Number of nodes with available pods: 0 Jul 1 01:00:33.840: INFO: Node latest-worker is running more than one daemon pod Jul 1 01:00:34.897: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 1 01:00:34.918: INFO: Number of nodes with available pods: 0 Jul 1 01:00:34.918: INFO: Node latest-worker is running more than one daemon pod Jul 1 01:00:35.849: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 1 01:00:35.854: INFO: Number of nodes with available pods: 1 Jul 1 01:00:35.854: INFO: Node latest-worker2 is running more than one daemon pod Jul 1 01:00:36.793: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule 
TimeAdded:}], skip checking this node Jul 1 01:00:36.796: INFO: Number of nodes with available pods: 2 Jul 1 01:00:36.796: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Set a daemon pod's phase to 'Failed', check that the daemon pod is revived. Jul 1 01:00:36.884: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 1 01:00:36.907: INFO: Number of nodes with available pods: 1 Jul 1 01:00:36.907: INFO: Node latest-worker2 is running more than one daemon pod Jul 1 01:00:37.982: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 1 01:00:37.986: INFO: Number of nodes with available pods: 1 Jul 1 01:00:37.986: INFO: Node latest-worker2 is running more than one daemon pod Jul 1 01:00:38.912: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 1 01:00:38.915: INFO: Number of nodes with available pods: 1 Jul 1 01:00:38.915: INFO: Node latest-worker2 is running more than one daemon pod Jul 1 01:00:39.912: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 1 01:00:39.915: INFO: Number of nodes with available pods: 1 Jul 1 01:00:39.915: INFO: Node latest-worker2 is running more than one daemon pod Jul 1 01:00:40.913: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 1 01:00:40.918: INFO: Number of nodes with available pods: 2 Jul 1 01:00:40.918: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Wait for the failed daemon pod to be completely deleted. [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:100 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-6345, will wait for the garbage collector to delete the pods Jul 1 01:00:40.984: INFO: Deleting DaemonSet.extensions daemon-set took: 7.098694ms Jul 1 01:00:41.284: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.362957ms Jul 1 01:00:55.288: INFO: Number of nodes with available pods: 0 Jul 1 01:00:55.288: INFO: Number of running nodes: 0, number of available pods: 0 Jul 1 01:00:55.291: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-6345/daemonsets","resourceVersion":"17259711"},"items":null} Jul 1 01:00:55.295: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-6345/pods","resourceVersion":"17259711"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 1 01:00:55.329: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-6345" for this suite. 
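The DaemonSet retry test above forces one daemon pod's phase to Failed through the API and verifies the controller revives it on the same node (available pods drop from 2 to 1, then return to 2). Flipping a pod's phase requires a status write, so as a rough manual analogue one can delete a daemon pod and watch the replacement appear; the namespace is taken from this run, and selecting the first pod assumes only daemon-set pods live in it:

  # Delete one daemon pod and watch the DaemonSet controller replace it.
  POD=$(kubectl get pods -n daemonsets-6345 -o name | head -n 1)
  kubectl delete -n daemonsets-6345 "$POD"
  kubectl get pods -n daemonsets-6345 -o wide --watch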
• [SLOW TEST:23.716 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]","total":294,"completed":256,"skipped":4177,"failed":1,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]"]} SSSSSSSSSS ------------------------------ [sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 1 01:00:55.339: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via environment variable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap configmap-9049/configmap-test-a2bc27d0-71d3-4a5a-a6bf-a7910fa553b3 STEP: Creating a pod to test consume configMaps Jul 1 01:00:55.521: INFO: Waiting up to 5m0s for pod "pod-configmaps-7f041b75-c85f-4068-aef4-1bfdf61df46b" in namespace "configmap-9049" to be "Succeeded or Failed" Jul 1 01:00:55.527: INFO: Pod "pod-configmaps-7f041b75-c85f-4068-aef4-1bfdf61df46b": Phase="Pending", Reason="", readiness=false. Elapsed: 5.608552ms Jul 1 01:00:57.554: INFO: Pod "pod-configmaps-7f041b75-c85f-4068-aef4-1bfdf61df46b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.032470127s Jul 1 01:00:59.559: INFO: Pod "pod-configmaps-7f041b75-c85f-4068-aef4-1bfdf61df46b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.037344988s STEP: Saw pod success Jul 1 01:00:59.559: INFO: Pod "pod-configmaps-7f041b75-c85f-4068-aef4-1bfdf61df46b" satisfied condition "Succeeded or Failed" Jul 1 01:00:59.562: INFO: Trying to get logs from node latest-worker pod pod-configmaps-7f041b75-c85f-4068-aef4-1bfdf61df46b container env-test: STEP: delete the pod Jul 1 01:00:59.603: INFO: Waiting for pod pod-configmaps-7f041b75-c85f-4068-aef4-1bfdf61df46b to disappear Jul 1 01:00:59.608: INFO: Pod pod-configmaps-7f041b75-c85f-4068-aef4-1bfdf61df46b no longer exists [AfterEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 1 01:00:59.608: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-9049" for this suite. 
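The ConfigMap test above injects a ConfigMap key into a container's environment and verifies the value from the pod's logs. A minimal sketch of the same mechanism with hypothetical names; only the configMapKeyRef shape mirrors the test:

  kubectl create configmap configmap-demo --from-literal=data-1=value-1
  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: configmap-env-demo       # hypothetical name
  spec:
    restartPolicy: Never
    containers:
    - name: env-test
      image: busybox
      command: ["sh", "-c", "echo CONFIG_DATA_1=$CONFIG_DATA_1"]
      env:
      - name: CONFIG_DATA_1
        valueFrom:
          configMapKeyRef:
            name: configmap-demo   # the ConfigMap created above
            key: data-1
  EOF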
•{"msg":"PASSED [sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance]","total":294,"completed":257,"skipped":4187,"failed":1,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Events should ensure that an event can be fetched, patched, deleted, and listed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 1 01:00:59.617: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename events STEP: Waiting for a default service account to be provisioned in namespace [It] should ensure that an event can be fetched, patched, deleted, and listed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a test event STEP: listing all events in all namespaces STEP: patching the test event STEP: fetching the test event STEP: deleting the test event STEP: listing all events in all namespaces [AfterEach] [sig-api-machinery] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 1 01:00:59.716: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "events-5340" for this suite. •{"msg":"PASSED [sig-api-machinery] Events should ensure that an event can be fetched, patched, deleted, and listed [Conformance]","total":294,"completed":258,"skipped":4223,"failed":1,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for pods for Subdomain [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 1 01:00:59.723: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for pods for Subdomain [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-2455.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-querier-2.dns-test-service-2.dns-2455.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-2455.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-querier-2.dns-test-service-2.dns-2455.svc.cluster.local;check="$$(dig +notcp +noall +answer +search dns-test-service-2.dns-2455.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service-2.dns-2455.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service-2.dns-2455.svc.cluster.local A)" && test -n "$$check" && echo OK > 
/results/wheezy_tcp@dns-test-service-2.dns-2455.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-2455.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-2455.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-querier-2.dns-test-service-2.dns-2455.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-2455.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-querier-2.dns-test-service-2.dns-2455.svc.cluster.local;check="$$(dig +notcp +noall +answer +search dns-test-service-2.dns-2455.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service-2.dns-2455.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service-2.dns-2455.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service-2.dns-2455.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-2455.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Jul 1 01:01:05.888: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-2455.svc.cluster.local from pod dns-2455/dns-test-457bb35d-d801-4b6e-ba58-d438c3a15762: the server could not find the requested resource (get pods dns-test-457bb35d-d801-4b6e-ba58-d438c3a15762) Jul 1 01:01:05.892: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-2455.svc.cluster.local from pod dns-2455/dns-test-457bb35d-d801-4b6e-ba58-d438c3a15762: the server could not find the requested resource (get pods dns-test-457bb35d-d801-4b6e-ba58-d438c3a15762) Jul 1 01:01:05.895: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-2455.svc.cluster.local from pod dns-2455/dns-test-457bb35d-d801-4b6e-ba58-d438c3a15762: the server could not find the requested resource (get pods dns-test-457bb35d-d801-4b6e-ba58-d438c3a15762) Jul 1 01:01:05.897: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-2455.svc.cluster.local from pod dns-2455/dns-test-457bb35d-d801-4b6e-ba58-d438c3a15762: the server could not find the requested resource (get pods dns-test-457bb35d-d801-4b6e-ba58-d438c3a15762) Jul 1 01:01:05.907: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-2455.svc.cluster.local from pod dns-2455/dns-test-457bb35d-d801-4b6e-ba58-d438c3a15762: the server could not find the requested resource (get pods dns-test-457bb35d-d801-4b6e-ba58-d438c3a15762) Jul 1 01:01:05.910: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-2455.svc.cluster.local from pod dns-2455/dns-test-457bb35d-d801-4b6e-ba58-d438c3a15762: the server could not find the requested resource (get pods dns-test-457bb35d-d801-4b6e-ba58-d438c3a15762) Jul 1 01:01:05.914: INFO: Unable to read 
jessie_udp@dns-test-service-2.dns-2455.svc.cluster.local from pod dns-2455/dns-test-457bb35d-d801-4b6e-ba58-d438c3a15762: the server could not find the requested resource (get pods dns-test-457bb35d-d801-4b6e-ba58-d438c3a15762) Jul 1 01:01:05.916: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-2455.svc.cluster.local from pod dns-2455/dns-test-457bb35d-d801-4b6e-ba58-d438c3a15762: the server could not find the requested resource (get pods dns-test-457bb35d-d801-4b6e-ba58-d438c3a15762) Jul 1 01:01:05.923: INFO: Lookups using dns-2455/dns-test-457bb35d-d801-4b6e-ba58-d438c3a15762 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-2455.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-2455.svc.cluster.local wheezy_udp@dns-test-service-2.dns-2455.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-2455.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-2455.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-2455.svc.cluster.local jessie_udp@dns-test-service-2.dns-2455.svc.cluster.local jessie_tcp@dns-test-service-2.dns-2455.svc.cluster.local] Jul 1 01:01:10.928: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-2455.svc.cluster.local from pod dns-2455/dns-test-457bb35d-d801-4b6e-ba58-d438c3a15762: the server could not find the requested resource (get pods dns-test-457bb35d-d801-4b6e-ba58-d438c3a15762) Jul 1 01:01:10.933: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-2455.svc.cluster.local from pod dns-2455/dns-test-457bb35d-d801-4b6e-ba58-d438c3a15762: the server could not find the requested resource (get pods dns-test-457bb35d-d801-4b6e-ba58-d438c3a15762) Jul 1 01:01:10.936: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-2455.svc.cluster.local from pod dns-2455/dns-test-457bb35d-d801-4b6e-ba58-d438c3a15762: the server could not find the requested resource (get pods dns-test-457bb35d-d801-4b6e-ba58-d438c3a15762) Jul 1 01:01:10.940: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-2455.svc.cluster.local from pod dns-2455/dns-test-457bb35d-d801-4b6e-ba58-d438c3a15762: the server could not find the requested resource (get pods dns-test-457bb35d-d801-4b6e-ba58-d438c3a15762) Jul 1 01:01:10.950: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-2455.svc.cluster.local from pod dns-2455/dns-test-457bb35d-d801-4b6e-ba58-d438c3a15762: the server could not find the requested resource (get pods dns-test-457bb35d-d801-4b6e-ba58-d438c3a15762) Jul 1 01:01:10.953: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-2455.svc.cluster.local from pod dns-2455/dns-test-457bb35d-d801-4b6e-ba58-d438c3a15762: the server could not find the requested resource (get pods dns-test-457bb35d-d801-4b6e-ba58-d438c3a15762) Jul 1 01:01:10.955: INFO: Unable to read jessie_udp@dns-test-service-2.dns-2455.svc.cluster.local from pod dns-2455/dns-test-457bb35d-d801-4b6e-ba58-d438c3a15762: the server could not find the requested resource (get pods dns-test-457bb35d-d801-4b6e-ba58-d438c3a15762) Jul 1 01:01:10.958: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-2455.svc.cluster.local from pod dns-2455/dns-test-457bb35d-d801-4b6e-ba58-d438c3a15762: the server could not find the requested resource (get pods dns-test-457bb35d-d801-4b6e-ba58-d438c3a15762) Jul 1 01:01:10.964: INFO: Lookups using dns-2455/dns-test-457bb35d-d801-4b6e-ba58-d438c3a15762 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-2455.svc.cluster.local 
wheezy_tcp@dns-querier-2.dns-test-service-2.dns-2455.svc.cluster.local wheezy_udp@dns-test-service-2.dns-2455.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-2455.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-2455.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-2455.svc.cluster.local jessie_udp@dns-test-service-2.dns-2455.svc.cluster.local jessie_tcp@dns-test-service-2.dns-2455.svc.cluster.local] Jul 1 01:01:15.929: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-2455.svc.cluster.local from pod dns-2455/dns-test-457bb35d-d801-4b6e-ba58-d438c3a15762: the server could not find the requested resource (get pods dns-test-457bb35d-d801-4b6e-ba58-d438c3a15762) Jul 1 01:01:15.934: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-2455.svc.cluster.local from pod dns-2455/dns-test-457bb35d-d801-4b6e-ba58-d438c3a15762: the server could not find the requested resource (get pods dns-test-457bb35d-d801-4b6e-ba58-d438c3a15762) Jul 1 01:01:15.937: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-2455.svc.cluster.local from pod dns-2455/dns-test-457bb35d-d801-4b6e-ba58-d438c3a15762: the server could not find the requested resource (get pods dns-test-457bb35d-d801-4b6e-ba58-d438c3a15762) Jul 1 01:01:15.942: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-2455.svc.cluster.local from pod dns-2455/dns-test-457bb35d-d801-4b6e-ba58-d438c3a15762: the server could not find the requested resource (get pods dns-test-457bb35d-d801-4b6e-ba58-d438c3a15762) Jul 1 01:01:15.951: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-2455.svc.cluster.local from pod dns-2455/dns-test-457bb35d-d801-4b6e-ba58-d438c3a15762: the server could not find the requested resource (get pods dns-test-457bb35d-d801-4b6e-ba58-d438c3a15762) Jul 1 01:01:15.954: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-2455.svc.cluster.local from pod dns-2455/dns-test-457bb35d-d801-4b6e-ba58-d438c3a15762: the server could not find the requested resource (get pods dns-test-457bb35d-d801-4b6e-ba58-d438c3a15762) Jul 1 01:01:15.958: INFO: Unable to read jessie_udp@dns-test-service-2.dns-2455.svc.cluster.local from pod dns-2455/dns-test-457bb35d-d801-4b6e-ba58-d438c3a15762: the server could not find the requested resource (get pods dns-test-457bb35d-d801-4b6e-ba58-d438c3a15762) Jul 1 01:01:15.961: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-2455.svc.cluster.local from pod dns-2455/dns-test-457bb35d-d801-4b6e-ba58-d438c3a15762: the server could not find the requested resource (get pods dns-test-457bb35d-d801-4b6e-ba58-d438c3a15762) Jul 1 01:01:15.968: INFO: Lookups using dns-2455/dns-test-457bb35d-d801-4b6e-ba58-d438c3a15762 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-2455.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-2455.svc.cluster.local wheezy_udp@dns-test-service-2.dns-2455.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-2455.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-2455.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-2455.svc.cluster.local jessie_udp@dns-test-service-2.dns-2455.svc.cluster.local jessie_tcp@dns-test-service-2.dns-2455.svc.cluster.local] Jul 1 01:01:20.928: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-2455.svc.cluster.local from pod dns-2455/dns-test-457bb35d-d801-4b6e-ba58-d438c3a15762: the server could not find the requested resource (get pods dns-test-457bb35d-d801-4b6e-ba58-d438c3a15762) 
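The "Unable to read ... from pod" lines above are the test polling the probe pod's /results marker files through the API server; a marker only appears once the corresponding dig query in the loops shown earlier gets an answer, which is why the same eight names are retried every five seconds until the records resolve. A minimal sketch of the underlying DNS checks in Go, using the stdlib resolver against this run's names (illustrative only, not the e2e framework's implementation):

package main

import (
	"context"
	"fmt"
	"net"
	"time"
)

func main() {
	// Names taken from this run; from outside namespace dns-2455 the
	// fully qualified forms below are required (no search path).
	names := []string{
		"dns-querier-2.dns-test-service-2.dns-2455.svc.cluster.local",
		"dns-test-service-2.dns-2455.svc.cluster.local",
	}
	r := &net.Resolver{PreferGo: true}
	for _, name := range names {
		ctx, cancel := context.WithTimeout(context.Background(), 2*time.Second)
		addrs, err := r.LookupHost(ctx, name)
		cancel()
		if err != nil {
			// Until the headless service's A records are published,
			// this fails just as the retried lookups above do.
			fmt.Printf("lookup %s failed: %v\n", name, err)
			continue
		}
		fmt.Printf("lookup %s -> %v\n", name, addrs)
	}
}
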
Jul 1 01:01:20.932: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-2455.svc.cluster.local from pod dns-2455/dns-test-457bb35d-d801-4b6e-ba58-d438c3a15762: the server could not find the requested resource (get pods dns-test-457bb35d-d801-4b6e-ba58-d438c3a15762) Jul 1 01:01:20.935: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-2455.svc.cluster.local from pod dns-2455/dns-test-457bb35d-d801-4b6e-ba58-d438c3a15762: the server could not find the requested resource (get pods dns-test-457bb35d-d801-4b6e-ba58-d438c3a15762) Jul 1 01:01:20.939: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-2455.svc.cluster.local from pod dns-2455/dns-test-457bb35d-d801-4b6e-ba58-d438c3a15762: the server could not find the requested resource (get pods dns-test-457bb35d-d801-4b6e-ba58-d438c3a15762) Jul 1 01:01:20.949: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-2455.svc.cluster.local from pod dns-2455/dns-test-457bb35d-d801-4b6e-ba58-d438c3a15762: the server could not find the requested resource (get pods dns-test-457bb35d-d801-4b6e-ba58-d438c3a15762) Jul 1 01:01:20.952: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-2455.svc.cluster.local from pod dns-2455/dns-test-457bb35d-d801-4b6e-ba58-d438c3a15762: the server could not find the requested resource (get pods dns-test-457bb35d-d801-4b6e-ba58-d438c3a15762) Jul 1 01:01:20.956: INFO: Unable to read jessie_udp@dns-test-service-2.dns-2455.svc.cluster.local from pod dns-2455/dns-test-457bb35d-d801-4b6e-ba58-d438c3a15762: the server could not find the requested resource (get pods dns-test-457bb35d-d801-4b6e-ba58-d438c3a15762) Jul 1 01:01:20.959: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-2455.svc.cluster.local from pod dns-2455/dns-test-457bb35d-d801-4b6e-ba58-d438c3a15762: the server could not find the requested resource (get pods dns-test-457bb35d-d801-4b6e-ba58-d438c3a15762) Jul 1 01:01:20.966: INFO: Lookups using dns-2455/dns-test-457bb35d-d801-4b6e-ba58-d438c3a15762 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-2455.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-2455.svc.cluster.local wheezy_udp@dns-test-service-2.dns-2455.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-2455.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-2455.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-2455.svc.cluster.local jessie_udp@dns-test-service-2.dns-2455.svc.cluster.local jessie_tcp@dns-test-service-2.dns-2455.svc.cluster.local] Jul 1 01:01:25.927: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-2455.svc.cluster.local from pod dns-2455/dns-test-457bb35d-d801-4b6e-ba58-d438c3a15762: the server could not find the requested resource (get pods dns-test-457bb35d-d801-4b6e-ba58-d438c3a15762) Jul 1 01:01:25.931: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-2455.svc.cluster.local from pod dns-2455/dns-test-457bb35d-d801-4b6e-ba58-d438c3a15762: the server could not find the requested resource (get pods dns-test-457bb35d-d801-4b6e-ba58-d438c3a15762) Jul 1 01:01:25.934: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-2455.svc.cluster.local from pod dns-2455/dns-test-457bb35d-d801-4b6e-ba58-d438c3a15762: the server could not find the requested resource (get pods dns-test-457bb35d-d801-4b6e-ba58-d438c3a15762) Jul 1 01:01:25.938: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-2455.svc.cluster.local from pod dns-2455/dns-test-457bb35d-d801-4b6e-ba58-d438c3a15762: the server 
could not find the requested resource (get pods dns-test-457bb35d-d801-4b6e-ba58-d438c3a15762) Jul 1 01:01:25.948: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-2455.svc.cluster.local from pod dns-2455/dns-test-457bb35d-d801-4b6e-ba58-d438c3a15762: the server could not find the requested resource (get pods dns-test-457bb35d-d801-4b6e-ba58-d438c3a15762) Jul 1 01:01:25.950: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-2455.svc.cluster.local from pod dns-2455/dns-test-457bb35d-d801-4b6e-ba58-d438c3a15762: the server could not find the requested resource (get pods dns-test-457bb35d-d801-4b6e-ba58-d438c3a15762) Jul 1 01:01:25.953: INFO: Unable to read jessie_udp@dns-test-service-2.dns-2455.svc.cluster.local from pod dns-2455/dns-test-457bb35d-d801-4b6e-ba58-d438c3a15762: the server could not find the requested resource (get pods dns-test-457bb35d-d801-4b6e-ba58-d438c3a15762) Jul 1 01:01:25.956: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-2455.svc.cluster.local from pod dns-2455/dns-test-457bb35d-d801-4b6e-ba58-d438c3a15762: the server could not find the requested resource (get pods dns-test-457bb35d-d801-4b6e-ba58-d438c3a15762) Jul 1 01:01:25.961: INFO: Lookups using dns-2455/dns-test-457bb35d-d801-4b6e-ba58-d438c3a15762 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-2455.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-2455.svc.cluster.local wheezy_udp@dns-test-service-2.dns-2455.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-2455.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-2455.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-2455.svc.cluster.local jessie_udp@dns-test-service-2.dns-2455.svc.cluster.local jessie_tcp@dns-test-service-2.dns-2455.svc.cluster.local] Jul 1 01:01:30.927: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-2455.svc.cluster.local from pod dns-2455/dns-test-457bb35d-d801-4b6e-ba58-d438c3a15762: the server could not find the requested resource (get pods dns-test-457bb35d-d801-4b6e-ba58-d438c3a15762) Jul 1 01:01:30.930: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-2455.svc.cluster.local from pod dns-2455/dns-test-457bb35d-d801-4b6e-ba58-d438c3a15762: the server could not find the requested resource (get pods dns-test-457bb35d-d801-4b6e-ba58-d438c3a15762) Jul 1 01:01:30.932: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-2455.svc.cluster.local from pod dns-2455/dns-test-457bb35d-d801-4b6e-ba58-d438c3a15762: the server could not find the requested resource (get pods dns-test-457bb35d-d801-4b6e-ba58-d438c3a15762) Jul 1 01:01:30.935: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-2455.svc.cluster.local from pod dns-2455/dns-test-457bb35d-d801-4b6e-ba58-d438c3a15762: the server could not find the requested resource (get pods dns-test-457bb35d-d801-4b6e-ba58-d438c3a15762) Jul 1 01:01:30.942: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-2455.svc.cluster.local from pod dns-2455/dns-test-457bb35d-d801-4b6e-ba58-d438c3a15762: the server could not find the requested resource (get pods dns-test-457bb35d-d801-4b6e-ba58-d438c3a15762) Jul 1 01:01:30.944: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-2455.svc.cluster.local from pod dns-2455/dns-test-457bb35d-d801-4b6e-ba58-d438c3a15762: the server could not find the requested resource (get pods dns-test-457bb35d-d801-4b6e-ba58-d438c3a15762) Jul 1 01:01:30.946: INFO: Unable to read 
jessie_udp@dns-test-service-2.dns-2455.svc.cluster.local from pod dns-2455/dns-test-457bb35d-d801-4b6e-ba58-d438c3a15762: the server could not find the requested resource (get pods dns-test-457bb35d-d801-4b6e-ba58-d438c3a15762) Jul 1 01:01:30.949: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-2455.svc.cluster.local from pod dns-2455/dns-test-457bb35d-d801-4b6e-ba58-d438c3a15762: the server could not find the requested resource (get pods dns-test-457bb35d-d801-4b6e-ba58-d438c3a15762) Jul 1 01:01:30.955: INFO: Lookups using dns-2455/dns-test-457bb35d-d801-4b6e-ba58-d438c3a15762 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-2455.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-2455.svc.cluster.local wheezy_udp@dns-test-service-2.dns-2455.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-2455.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-2455.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-2455.svc.cluster.local jessie_udp@dns-test-service-2.dns-2455.svc.cluster.local jessie_tcp@dns-test-service-2.dns-2455.svc.cluster.local] Jul 1 01:01:35.974: INFO: DNS probes using dns-2455/dns-test-457bb35d-d801-4b6e-ba58-d438c3a15762 succeeded STEP: deleting the pod STEP: deleting the test headless service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 1 01:01:36.580: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-2455" for this suite. • [SLOW TEST:36.914 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for pods for Subdomain [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for pods for Subdomain [Conformance]","total":294,"completed":259,"skipped":4262,"failed":1,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]"]} SSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 1 01:01:36.637: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:251 [It] should create and stop a working application [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating all guestbook components Jul 1 01:01:36.735: INFO:
apiVersion: v1
kind: Service
metadata:
  name: agnhost-slave
  labels:
    app: agnhost
    role: slave
    tier: backend
spec:
  ports:
  - port: 6379
  selector:
    app: agnhost
    role: slave
    tier: backend
Jul 1 01:01:36.735: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-1265' Jul 1 01:01:37.077: INFO: stderr: "" Jul 1 01:01:37.077: INFO: stdout: "service/agnhost-slave created\n" Jul 1 01:01:37.077: INFO:
apiVersion: v1
kind: Service
metadata:
  name: agnhost-master
  labels:
    app: agnhost
    role: master
    tier: backend
spec:
  ports:
  - port: 6379
    targetPort: 6379
  selector:
    app: agnhost
    role: master
    tier: backend
Jul 1 01:01:37.077: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-1265' Jul 1 01:01:37.402: INFO: stderr: "" Jul 1 01:01:37.402: INFO: stdout: "service/agnhost-master created\n" Jul 1 01:01:37.402: INFO:
apiVersion: v1
kind: Service
metadata:
  name: frontend
  labels:
    app: guestbook
    tier: frontend
spec:
  # if your cluster supports it, uncomment the following to automatically create
  # an external load-balanced IP for the frontend service.
  # type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: guestbook
    tier: frontend
Jul 1 01:01:37.402: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-1265' Jul 1 01:01:37.808: INFO: stderr: "" Jul 1 01:01:37.808: INFO: stdout: "service/frontend created\n" Jul 1 01:01:37.808: INFO:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend
spec:
  replicas: 3
  selector:
    matchLabels:
      app: guestbook
      tier: frontend
  template:
    metadata:
      labels:
        app: guestbook
        tier: frontend
    spec:
      containers:
      - name: guestbook-frontend
        image: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13
        args: [ "guestbook", "--backend-port", "6379" ]
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 80
Jul 1 01:01:37.808: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-1265' Jul 1 01:01:38.120: INFO: stderr: "" Jul 1 01:01:38.120: INFO: stdout: "deployment.apps/frontend created\n" Jul 1 01:01:38.121: INFO:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: agnhost-master
spec:
  replicas: 1
  selector:
    matchLabels:
      app: agnhost
      role: master
      tier: backend
  template:
    metadata:
      labels:
        app: agnhost
        role: master
        tier: backend
    spec:
      containers:
      - name: master
        image: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13
        args: [ "guestbook", "--http-port", "6379" ]
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 6379
Jul 1 01:01:38.121: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-1265' Jul 1 01:01:38.736: INFO: stderr: "" Jul 1 01:01:38.736: INFO: stdout: "deployment.apps/agnhost-master created\n" Jul 1 01:01:38.736: INFO:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: agnhost-slave
spec:
  replicas: 2
  selector:
    matchLabels:
      app: agnhost
      role: slave
      tier: backend
  template:
    metadata:
      labels:
        app: agnhost
        role: slave
        tier: backend
    spec:
      containers:
      - name: slave
        image: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13
        args: [ "guestbook", "--slaveof", "agnhost-master", "--http-port", "6379" ]
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 6379
Jul 1 01:01:38.736: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-1265' Jul 1 01:01:39.198: INFO: stderr: "" Jul 1 01:01:39.198: INFO: stdout: "deployment.apps/agnhost-slave created\n" STEP: validating guestbook app Jul 1 01:01:39.198: INFO: Waiting for all frontend pods to be Running. Jul 1 01:01:49.249: INFO: Waiting for frontend to serve content.
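Each "Running .../kubectl ... create -f -" entry above pipes one of the manifests to kubectl over stdin rather than from a file. A small Go sketch of that pattern (assuming kubectl on PATH and a reachable cluster; the e2e framework uses its own exec helpers, not this code):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// One of the guestbook manifests from the log above.
const manifest = `apiVersion: v1
kind: Service
metadata:
  name: agnhost-slave
  labels:
    app: agnhost
    role: slave
    tier: backend
spec:
  ports:
  - port: 6379
  selector:
    app: agnhost
    role: slave
    tier: backend
`

func main() {
	cmd := exec.Command("kubectl", "create", "-f", "-", "--namespace=kubectl-1265")
	cmd.Stdin = strings.NewReader(manifest) // "-f -" reads the manifest from stdin
	out, err := cmd.CombinedOutput()
	fmt.Print(string(out)) // e.g. "service/agnhost-slave created"
	if err != nil {
		fmt.Println("create failed:", err)
	}
}
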
Jul 1 01:01:49.260: INFO: Trying to add a new entry to the guestbook. Jul 1 01:01:49.270: INFO: Verifying that added entry can be retrieved. STEP: using delete to clean up resources Jul 1 01:01:49.279: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-1265' Jul 1 01:01:49.430: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Jul 1 01:01:49.430: INFO: stdout: "service \"agnhost-slave\" force deleted\n" STEP: using delete to clean up resources Jul 1 01:01:49.430: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-1265' Jul 1 01:01:49.619: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Jul 1 01:01:49.619: INFO: stdout: "service \"agnhost-master\" force deleted\n" STEP: using delete to clean up resources Jul 1 01:01:49.620: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-1265' Jul 1 01:01:49.826: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Jul 1 01:01:49.826: INFO: stdout: "service \"frontend\" force deleted\n" STEP: using delete to clean up resources Jul 1 01:01:49.826: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-1265' Jul 1 01:01:49.953: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Jul 1 01:01:49.954: INFO: stdout: "deployment.apps \"frontend\" force deleted\n" STEP: using delete to clean up resources Jul 1 01:01:49.954: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-1265' Jul 1 01:01:50.090: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Jul 1 01:01:50.090: INFO: stdout: "deployment.apps \"agnhost-master\" force deleted\n" STEP: using delete to clean up resources Jul 1 01:01:50.091: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-1265' Jul 1 01:01:50.379: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Jul 1 01:01:50.379: INFO: stdout: "deployment.apps \"agnhost-slave\" force deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 1 01:01:50.380: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-1265" for this suite. 
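The warning repeated during cleanup is what --grace-period=0 --force means: the deletion is recorded immediately, without waiting for the kubelet to confirm the containers are gone. A sketch of the comparable call through client-go (illustrative; the test shells out to kubectl, whose --force flag is an additional client-side confirmation on top of the zero grace period):

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	zero := int64(0)
	// GracePeriodSeconds: 0 skips the graceful-termination wait, hence
	// the "may continue to run on the cluster" warning in the log.
	err = cs.CoreV1().Services("kubectl-1265").Delete(context.TODO(),
		"frontend", metav1.DeleteOptions{GracePeriodSeconds: &zero})
	fmt.Println("delete err:", err)
}
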
• [SLOW TEST:13.835 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Guestbook application /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:342 should create and stop a working application [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]","total":294,"completed":260,"skipped":4274,"failed":1,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 1 01:01:50.474: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Jul 1 01:01:52.732: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Jul 1 01:01:54.915: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729162112, loc:(*time.Location)(0x80643c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729162112, loc:(*time.Location)(0x80643c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729162112, loc:(*time.Location)(0x80643c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729162112, loc:(*time.Location)(0x80643c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Jul 1 01:01:57.980: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] patching/updating a mutating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a mutating webhook configuration STEP: Updating a mutating webhook configuration's rules to not include the create operation STEP: Creating a configMap that should not be mutated STEP: Patching a mutating webhook configuration's rules to include the create operation STEP: Creating a configMap that should be mutated 
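The update and patch steps above toggle which admission operations the webhook intercepts: with CREATE removed from its rules the first configMap is admitted unmutated, and once the patch re-adds CREATE the second configMap is mutated. A sketch of the patch step with client-go and a JSON patch (the configuration name below is made up for illustration):

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Re-add CREATE to the first rule of the first webhook.
	patch := []byte(`[{"op":"replace","path":"/webhooks/0/rules/0/operations","value":["CREATE"]}]`)
	_, err = cs.AdmissionregistrationV1().MutatingWebhookConfigurations().Patch(
		context.TODO(), "e2e-test-mutating-webhook", // illustrative name
		types.JSONPatchType, patch, metav1.PatchOptions{})
	fmt.Println("patch err:", err)
}
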
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 1 01:01:58.166: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-5030" for this suite. STEP: Destroying namespace "webhook-5030-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:7.895 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 patching/updating a mutating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]","total":294,"completed":261,"skipped":4297,"failed":1,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 1 01:01:58.370: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir-wrapper STEP: Waiting for a default service account to be provisioned in namespace [It] should not cause race condition when used for configmaps [Serial] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating 50 configmaps STEP: Creating RC which spawns configmap-volume pods Jul 1 01:01:59.825: INFO: Pod name wrapped-volume-race-c24d8dff-0c86-4083-b1c1-f9918b9609f7: Found 0 pods out of 5 Jul 1 01:02:04.834: INFO: Pod name wrapped-volume-race-c24d8dff-0c86-4083-b1c1-f9918b9609f7: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-c24d8dff-0c86-4083-b1c1-f9918b9609f7 in namespace emptydir-wrapper-2704, will wait for the garbage collector to delete the pods Jul 1 01:02:19.217: INFO: Deleting ReplicationController wrapped-volume-race-c24d8dff-0c86-4083-b1c1-f9918b9609f7 took: 20.424476ms Jul 1 01:02:19.518: INFO: Terminating ReplicationController wrapped-volume-race-c24d8dff-0c86-4083-b1c1-f9918b9609f7 pods took: 300.315192ms STEP: Creating RC which spawns configmap-volume pods Jul 1 01:02:35.253: INFO: Pod name wrapped-volume-race-98b4eee0-dc99-4d73-9a91-2ff8c9be77f5: Found 0 pods out of 5 Jul 1 01:02:40.260: INFO: Pod name wrapped-volume-race-98b4eee0-dc99-4d73-9a91-2ff8c9be77f5: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-98b4eee0-dc99-4d73-9a91-2ff8c9be77f5 in namespace emptydir-wrapper-2704, will wait for the garbage collector to delete the pods Jul 1 01:02:54.350: INFO: Deleting ReplicationController wrapped-volume-race-98b4eee0-dc99-4d73-9a91-2ff8c9be77f5 took: 6.313327ms Jul 1 01:02:54.650: INFO: 
Terminating ReplicationController wrapped-volume-race-98b4eee0-dc99-4d73-9a91-2ff8c9be77f5 pods took: 300.250057ms STEP: Creating RC which spawns configmap-volume pods Jul 1 01:03:05.128: INFO: Pod name wrapped-volume-race-b87e3d72-db48-4da8-ac21-0decf28261cb: Found 0 pods out of 5 Jul 1 01:03:10.137: INFO: Pod name wrapped-volume-race-b87e3d72-db48-4da8-ac21-0decf28261cb: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-b87e3d72-db48-4da8-ac21-0decf28261cb in namespace emptydir-wrapper-2704, will wait for the garbage collector to delete the pods Jul 1 01:03:24.295: INFO: Deleting ReplicationController wrapped-volume-race-b87e3d72-db48-4da8-ac21-0decf28261cb took: 64.398899ms Jul 1 01:03:24.595: INFO: Terminating ReplicationController wrapped-volume-race-b87e3d72-db48-4da8-ac21-0decf28261cb pods took: 300.273474ms STEP: Cleaning up the configMaps [AfterEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 1 01:03:36.036: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-wrapper-2704" for this suite. • [SLOW TEST:97.695 seconds] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 should not cause race condition when used for configmaps [Serial] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Conformance]","total":294,"completed":262,"skipped":4322,"failed":1,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]"]} SSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 1 01:03:36.065: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:162 [It] should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating the pod Jul 1 01:03:36.168: INFO: PodSpec: initContainers in spec.initContainers Jul 1 01:04:24.779: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-c7fadb80-488a-4304-bf07-536bf25289fb", GenerateName:"", Namespace:"init-container-3700", SelfLink:"/api/v1/namespaces/init-container-3700/pods/pod-init-c7fadb80-488a-4304-bf07-536bf25289fb", UID:"6a4f8e18-ed0e-4d27-a27c-03eaed1991a0", ResourceVersion:"17261564", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63729162216, loc:(*time.Location)(0x80643c0)}}, 
DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"168234624"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc0049c8320), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc0049c8440)}, v1.ManagedFieldsEntry{Manager:"kubelet", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc0049c8560), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc0049c8700)}}}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-zwp6j", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc0066f6000), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}}, InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-zwp6j", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-zwp6j", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), 
ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"k8s.gcr.io/pause:3.2", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-zwp6j", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc00623c098), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"latest-worker2", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc000f02070), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc00623c120)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc00623c140)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc00623c148), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc00623c14c), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729162216, loc:(*time.Location)(0x80643c0)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729162216, loc:(*time.Location)(0x80643c0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, 
v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729162216, loc:(*time.Location)(0x80643c0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729162216, loc:(*time.Location)(0x80643c0)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"172.17.0.12", PodIP:"10.244.2.25", PodIPs:[]v1.PodIP{v1.PodIP{IP:"10.244.2.25"}}, StartTime:(*v1.Time)(0xc0049c8840), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(0xc0049c8a80), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc000f021c0)}, Ready:false, RestartCount:3, Image:"docker.io/library/busybox:1.29", ImageID:"docker.io/library/busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796", ContainerID:"containerd://fe1a05414652621d8bfc0a5a621d49d3f910512268b86ee7e8fdfc0c8ed60d98", Started:(*bool)(nil)}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc0049c8c20), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"docker.io/library/busybox:1.29", ImageID:"", ContainerID:"", Started:(*bool)(nil)}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc0049c8960), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.2", ImageID:"", ContainerID:"", Started:(*bool)(0xc00623c1ef)}}, QOSClass:"Burstable", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)}} [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 1 01:04:24.780: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-3700" for this suite. 
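Almost all of the pod dump above is defaulted fields; what the assertion depends on is the container ordering: init1 runs /bin/false and keeps failing (RestartCount:3 under RestartPolicy "Always"), init2 never leaves Waiting, and the app container run1 stays Waiting with reason ContainersNotInitialized, which is exactly what "should not start app containers" verifies. A sketch of that spec using the k8s.io/api types (a reconstruction from the dump, not the test's source):

package main

import (
	"encoding/json"
	"fmt"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &v1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-init-example"},
		Spec: v1.PodSpec{
			RestartPolicy: v1.RestartPolicyAlways,
			InitContainers: []v1.Container{
				// init1 always fails, so init2 and run1 must never start.
				{Name: "init1", Image: "docker.io/library/busybox:1.29", Command: []string{"/bin/false"}},
				{Name: "init2", Image: "docker.io/library/busybox:1.29", Command: []string{"/bin/true"}},
			},
			Containers: []v1.Container{
				{Name: "run1", Image: "k8s.gcr.io/pause:3.2"},
			},
		},
	}
	b, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(b))
}
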
• [SLOW TEST:48.750 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance]","total":294,"completed":263,"skipped":4339,"failed":1,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]"]} SSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Security Context When creating a pod with privileged should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 1 01:04:24.816: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41 [It] should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Jul 1 01:04:24.931: INFO: Waiting up to 5m0s for pod "busybox-privileged-false-c183e931-abf8-4f9c-b80f-86f63ffd362a" in namespace "security-context-test-3240" to be "Succeeded or Failed" Jul 1 01:04:24.961: INFO: Pod "busybox-privileged-false-c183e931-abf8-4f9c-b80f-86f63ffd362a": Phase="Pending", Reason="", readiness=false. Elapsed: 30.457278ms Jul 1 01:04:26.967: INFO: Pod "busybox-privileged-false-c183e931-abf8-4f9c-b80f-86f63ffd362a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.035685838s Jul 1 01:04:28.971: INFO: Pod "busybox-privileged-false-c183e931-abf8-4f9c-b80f-86f63ffd362a": Phase="Running", Reason="", readiness=true. Elapsed: 4.039940386s Jul 1 01:04:30.975: INFO: Pod "busybox-privileged-false-c183e931-abf8-4f9c-b80f-86f63ffd362a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.044356793s Jul 1 01:04:30.975: INFO: Pod "busybox-privileged-false-c183e931-abf8-4f9c-b80f-86f63ffd362a" satisfied condition "Succeeded or Failed" Jul 1 01:04:30.997: INFO: Got logs for pod "busybox-privileged-false-c183e931-abf8-4f9c-b80f-86f63ffd362a": "ip: RTNETLINK answers: Operation not permitted\n" [AfterEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 1 01:04:30.997: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-3240" for this suite. 
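The captured container log "ip: RTNETLINK answers: Operation not permitted" is the expected outcome: with privileged set to false the container does not get CAP_NET_ADMIN, so netlink mutations are refused while the pod itself still exits successfully. A sketch of the relevant pod shape (names and command reconstructed from the log, for illustration only):

package main

import (
	"encoding/json"
	"fmt"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	priv := false
	pod := &v1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "busybox-privileged-false-example"},
		Spec: v1.PodSpec{
			RestartPolicy: v1.RestartPolicyNever,
			Containers: []v1.Container{{
				Name:  "busybox-privileged-false-example",
				Image: "docker.io/library/busybox:1.29",
				// The netlink operation is denied when unprivileged;
				// "|| true" keeps the pod Succeeded, as in the run above.
				Command:         []string{"sh", "-c", "ip link add dummy0 type dummy || true"},
				SecurityContext: &v1.SecurityContext{Privileged: &priv},
			}},
		},
	}
	b, _ := json.MarshalIndent(pod.Spec, "", "  ")
	fmt.Println(string(b))
}
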
• [SLOW TEST:6.189 seconds] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 When creating a pod with privileged /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:227 should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Security Context When creating a pod with privileged should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]","total":294,"completed":264,"skipped":4356,"failed":1,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]"]} SSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should include webhook resources in discovery documents [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 1 01:04:31.006: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Jul 1 01:04:31.749: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Jul 1 01:04:33.832: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729162271, loc:(*time.Location)(0x80643c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729162271, loc:(*time.Location)(0x80643c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729162271, loc:(*time.Location)(0x80643c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729162271, loc:(*time.Location)(0x80643c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Jul 1 01:04:36.912: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should include webhook resources in discovery documents [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: fetching the /apis discovery document STEP: finding the admissionregistration.k8s.io API group in the /apis discovery document STEP: finding the admissionregistration.k8s.io/v1 API group/version in the /apis 
discovery document STEP: fetching the /apis/admissionregistration.k8s.io discovery document STEP: finding the admissionregistration.k8s.io/v1 API group/version in the /apis/admissionregistration.k8s.io discovery document STEP: fetching the /apis/admissionregistration.k8s.io/v1 discovery document STEP: finding mutatingwebhookconfigurations and validatingwebhookconfigurations resources in the /apis/admissionregistration.k8s.io/v1 discovery document [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 1 01:04:36.920: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-4545" for this suite. STEP: Destroying namespace "webhook-4545-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:6.005 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should include webhook resources in discovery documents [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should include webhook resources in discovery documents [Conformance]","total":294,"completed":265,"skipped":4371,"failed":1,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]"]} SSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 1 01:04:37.011: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name projected-configmap-test-volume-7ba1b9a3-0956-4867-8c67-88f9facf2224 STEP: Creating a pod to test consume configMaps Jul 1 01:04:37.094: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-af9e4e12-6b10-47ef-87af-b92394aeeca9" in namespace "projected-5856" to be "Succeeded or Failed" Jul 1 01:04:37.098: INFO: Pod "pod-projected-configmaps-af9e4e12-6b10-47ef-87af-b92394aeeca9": Phase="Pending", Reason="", readiness=false. Elapsed: 4.016051ms Jul 1 01:04:39.102: INFO: Pod "pod-projected-configmaps-af9e4e12-6b10-47ef-87af-b92394aeeca9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008502789s Jul 1 01:04:41.107: INFO: Pod "pod-projected-configmaps-af9e4e12-6b10-47ef-87af-b92394aeeca9": Phase="Running", Reason="", readiness=true. Elapsed: 4.013089945s Jul 1 01:04:43.112: INFO: Pod "pod-projected-configmaps-af9e4e12-6b10-47ef-87af-b92394aeeca9": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.017910007s STEP: Saw pod success Jul 1 01:04:43.112: INFO: Pod "pod-projected-configmaps-af9e4e12-6b10-47ef-87af-b92394aeeca9" satisfied condition "Succeeded or Failed" Jul 1 01:04:43.116: INFO: Trying to get logs from node latest-worker2 pod pod-projected-configmaps-af9e4e12-6b10-47ef-87af-b92394aeeca9 container projected-configmap-volume-test: STEP: delete the pod Jul 1 01:04:43.148: INFO: Waiting for pod pod-projected-configmaps-af9e4e12-6b10-47ef-87af-b92394aeeca9 to disappear Jul 1 01:04:43.152: INFO: Pod pod-projected-configmaps-af9e4e12-6b10-47ef-87af-b92394aeeca9 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 1 01:04:43.152: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5856" for this suite. • [SLOW TEST:6.149 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:36 should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":294,"completed":266,"skipped":4381,"failed":1,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]"]} SSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 1 01:04:43.160: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:809 [It] should have session affinity work for NodePort service [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating service in namespace services-9432 STEP: creating service affinity-nodeport in namespace services-9432 STEP: creating replication controller affinity-nodeport in namespace services-9432 I0701 01:04:43.462300 8 runners.go:190] Created replication controller with name: affinity-nodeport, namespace: services-9432, replica count: 3 I0701 01:04:46.512693 8 runners.go:190] affinity-nodeport Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0701 01:04:49.512959 8 runners.go:190] affinity-nodeport Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Jul 1 01:04:49.525: INFO: Creating new exec pod Jul 1 01:04:54.555: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-9432 execpod-affinitysw8rs -- /bin/sh -x -c nc -zv -t -w 2 affinity-nodeport 80' Jul 1 01:04:54.843: INFO: 
stderr: "I0701 01:04:54.703087 3138 log.go:172] (0xc00083c000) (0xc0008448c0) Create stream\nI0701 01:04:54.703161 3138 log.go:172] (0xc00083c000) (0xc0008448c0) Stream added, broadcasting: 1\nI0701 01:04:54.704930 3138 log.go:172] (0xc00083c000) Reply frame received for 1\nI0701 01:04:54.704987 3138 log.go:172] (0xc00083c000) (0xc000865a40) Create stream\nI0701 01:04:54.705001 3138 log.go:172] (0xc00083c000) (0xc000865a40) Stream added, broadcasting: 3\nI0701 01:04:54.706065 3138 log.go:172] (0xc00083c000) Reply frame received for 3\nI0701 01:04:54.706095 3138 log.go:172] (0xc00083c000) (0xc000844dc0) Create stream\nI0701 01:04:54.706104 3138 log.go:172] (0xc00083c000) (0xc000844dc0) Stream added, broadcasting: 5\nI0701 01:04:54.706887 3138 log.go:172] (0xc00083c000) Reply frame received for 5\nI0701 01:04:54.833042 3138 log.go:172] (0xc00083c000) Data frame received for 5\nI0701 01:04:54.833074 3138 log.go:172] (0xc000844dc0) (5) Data frame handling\nI0701 01:04:54.833088 3138 log.go:172] (0xc000844dc0) (5) Data frame sent\n+ nc -zv -t -w 2 affinity-nodeport 80\nI0701 01:04:54.834589 3138 log.go:172] (0xc00083c000) Data frame received for 5\nI0701 01:04:54.834618 3138 log.go:172] (0xc000844dc0) (5) Data frame handling\nI0701 01:04:54.834640 3138 log.go:172] (0xc000844dc0) (5) Data frame sent\nI0701 01:04:54.834654 3138 log.go:172] (0xc00083c000) Data frame received for 5\nI0701 01:04:54.834665 3138 log.go:172] (0xc000844dc0) (5) Data frame handling\nConnection to affinity-nodeport 80 port [tcp/http] succeeded!\nI0701 01:04:54.834763 3138 log.go:172] (0xc00083c000) Data frame received for 3\nI0701 01:04:54.834780 3138 log.go:172] (0xc000865a40) (3) Data frame handling\nI0701 01:04:54.836718 3138 log.go:172] (0xc00083c000) Data frame received for 1\nI0701 01:04:54.836738 3138 log.go:172] (0xc0008448c0) (1) Data frame handling\nI0701 01:04:54.836750 3138 log.go:172] (0xc0008448c0) (1) Data frame sent\nI0701 01:04:54.836761 3138 log.go:172] (0xc00083c000) (0xc0008448c0) Stream removed, broadcasting: 1\nI0701 01:04:54.836777 3138 log.go:172] (0xc00083c000) Go away received\nI0701 01:04:54.837049 3138 log.go:172] (0xc00083c000) (0xc0008448c0) Stream removed, broadcasting: 1\nI0701 01:04:54.837063 3138 log.go:172] (0xc00083c000) (0xc000865a40) Stream removed, broadcasting: 3\nI0701 01:04:54.837070 3138 log.go:172] (0xc00083c000) (0xc000844dc0) Stream removed, broadcasting: 5\n" Jul 1 01:04:54.843: INFO: stdout: "" Jul 1 01:04:54.844: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-9432 execpod-affinitysw8rs -- /bin/sh -x -c nc -zv -t -w 2 10.102.85.164 80' Jul 1 01:04:55.062: INFO: stderr: "I0701 01:04:54.984366 3157 log.go:172] (0xc0009d5340) (0xc000b0c1e0) Create stream\nI0701 01:04:54.984417 3157 log.go:172] (0xc0009d5340) (0xc000b0c1e0) Stream added, broadcasting: 1\nI0701 01:04:54.989953 3157 log.go:172] (0xc0009d5340) Reply frame received for 1\nI0701 01:04:54.989999 3157 log.go:172] (0xc0009d5340) (0xc0004ca280) Create stream\nI0701 01:04:54.990009 3157 log.go:172] (0xc0009d5340) (0xc0004ca280) Stream added, broadcasting: 3\nI0701 01:04:54.991035 3157 log.go:172] (0xc0009d5340) Reply frame received for 3\nI0701 01:04:54.991061 3157 log.go:172] (0xc0009d5340) (0xc0003f2aa0) Create stream\nI0701 01:04:54.991070 3157 log.go:172] (0xc0009d5340) (0xc0003f2aa0) Stream added, broadcasting: 5\nI0701 01:04:54.992146 3157 log.go:172] (0xc0009d5340) Reply frame received for 5\nI0701 01:04:55.054956 3157 log.go:172] 
(0xc0009d5340) Data frame received for 3\nI0701 01:04:55.054990 3157 log.go:172] (0xc0004ca280) (3) Data frame handling\nI0701 01:04:55.055011 3157 log.go:172] (0xc0009d5340) Data frame received for 5\nI0701 01:04:55.055018 3157 log.go:172] (0xc0003f2aa0) (5) Data frame handling\nI0701 01:04:55.055028 3157 log.go:172] (0xc0003f2aa0) (5) Data frame sent\nI0701 01:04:55.055035 3157 log.go:172] (0xc0009d5340) Data frame received for 5\nI0701 01:04:55.055043 3157 log.go:172] (0xc0003f2aa0) (5) Data frame handling\n+ nc -zv -t -w 2 10.102.85.164 80\nConnection to 10.102.85.164 80 port [tcp/http] succeeded!\nI0701 01:04:55.056614 3157 log.go:172] (0xc0009d5340) Data frame received for 1\nI0701 01:04:55.056650 3157 log.go:172] (0xc000b0c1e0) (1) Data frame handling\nI0701 01:04:55.056672 3157 log.go:172] (0xc000b0c1e0) (1) Data frame sent\nI0701 01:04:55.056694 3157 log.go:172] (0xc0009d5340) (0xc000b0c1e0) Stream removed, broadcasting: 1\nI0701 01:04:55.056733 3157 log.go:172] (0xc0009d5340) Go away received\nI0701 01:04:55.057079 3157 log.go:172] (0xc0009d5340) (0xc000b0c1e0) Stream removed, broadcasting: 1\nI0701 01:04:55.057098 3157 log.go:172] (0xc0009d5340) (0xc0004ca280) Stream removed, broadcasting: 3\nI0701 01:04:55.057107 3157 log.go:172] (0xc0009d5340) (0xc0003f2aa0) Stream removed, broadcasting: 5\n" Jul 1 01:04:55.063: INFO: stdout: "" Jul 1 01:04:55.063: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-9432 execpod-affinitysw8rs -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.13 31083' Jul 1 01:04:55.286: INFO: stderr: "I0701 01:04:55.205009 3177 log.go:172] (0xc000b41130) (0xc000b0c3c0) Create stream\nI0701 01:04:55.205075 3177 log.go:172] (0xc000b41130) (0xc000b0c3c0) Stream added, broadcasting: 1\nI0701 01:04:55.210206 3177 log.go:172] (0xc000b41130) Reply frame received for 1\nI0701 01:04:55.210259 3177 log.go:172] (0xc000b41130) (0xc0006d32c0) Create stream\nI0701 01:04:55.210293 3177 log.go:172] (0xc000b41130) (0xc0006d32c0) Stream added, broadcasting: 3\nI0701 01:04:55.211250 3177 log.go:172] (0xc000b41130) Reply frame received for 3\nI0701 01:04:55.211306 3177 log.go:172] (0xc000b41130) (0xc00040d2c0) Create stream\nI0701 01:04:55.211342 3177 log.go:172] (0xc000b41130) (0xc00040d2c0) Stream added, broadcasting: 5\nI0701 01:04:55.212311 3177 log.go:172] (0xc000b41130) Reply frame received for 5\nI0701 01:04:55.276502 3177 log.go:172] (0xc000b41130) Data frame received for 3\nI0701 01:04:55.276534 3177 log.go:172] (0xc0006d32c0) (3) Data frame handling\nI0701 01:04:55.276568 3177 log.go:172] (0xc000b41130) Data frame received for 5\nI0701 01:04:55.276599 3177 log.go:172] (0xc00040d2c0) (5) Data frame handling\nI0701 01:04:55.276619 3177 log.go:172] (0xc00040d2c0) (5) Data frame sent\nI0701 01:04:55.276631 3177 log.go:172] (0xc000b41130) Data frame received for 5\nI0701 01:04:55.276640 3177 log.go:172] (0xc00040d2c0) (5) Data frame handling\n+ nc -zv -t -w 2 172.17.0.13 31083\nConnection to 172.17.0.13 31083 port [tcp/31083] succeeded!\nI0701 01:04:55.278397 3177 log.go:172] (0xc000b41130) Data frame received for 1\nI0701 01:04:55.278418 3177 log.go:172] (0xc000b0c3c0) (1) Data frame handling\nI0701 01:04:55.278439 3177 log.go:172] (0xc000b0c3c0) (1) Data frame sent\nI0701 01:04:55.278565 3177 log.go:172] (0xc000b41130) (0xc000b0c3c0) Stream removed, broadcasting: 1\nI0701 01:04:55.278618 3177 log.go:172] (0xc000b41130) Go away received\nI0701 01:04:55.278864 3177 log.go:172] (0xc000b41130) 
(0xc000b0c3c0) Stream removed, broadcasting: 1\nI0701 01:04:55.278877 3177 log.go:172] (0xc000b41130) (0xc0006d32c0) Stream removed, broadcasting: 3\nI0701 01:04:55.278883 3177 log.go:172] (0xc000b41130) (0xc00040d2c0) Stream removed, broadcasting: 5\n" Jul 1 01:04:55.286: INFO: stdout: "" Jul 1 01:04:55.286: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-9432 execpod-affinitysw8rs -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.12 31083' Jul 1 01:04:55.497: INFO: stderr: "I0701 01:04:55.412438 3197 log.go:172] (0xc0009e0f20) (0xc000a428c0) Create stream\nI0701 01:04:55.412491 3197 log.go:172] (0xc0009e0f20) (0xc000a428c0) Stream added, broadcasting: 1\nI0701 01:04:55.416034 3197 log.go:172] (0xc0009e0f20) Reply frame received for 1\nI0701 01:04:55.416070 3197 log.go:172] (0xc0009e0f20) (0xc0009b2000) Create stream\nI0701 01:04:55.416078 3197 log.go:172] (0xc0009e0f20) (0xc0009b2000) Stream added, broadcasting: 3\nI0701 01:04:55.416948 3197 log.go:172] (0xc0009e0f20) Reply frame received for 3\nI0701 01:04:55.416977 3197 log.go:172] (0xc0009e0f20) (0xc0009b20a0) Create stream\nI0701 01:04:55.416988 3197 log.go:172] (0xc0009e0f20) (0xc0009b20a0) Stream added, broadcasting: 5\nI0701 01:04:55.418089 3197 log.go:172] (0xc0009e0f20) Reply frame received for 5\nI0701 01:04:55.490208 3197 log.go:172] (0xc0009e0f20) Data frame received for 3\nI0701 01:04:55.490253 3197 log.go:172] (0xc0009b2000) (3) Data frame handling\nI0701 01:04:55.490280 3197 log.go:172] (0xc0009e0f20) Data frame received for 5\nI0701 01:04:55.490298 3197 log.go:172] (0xc0009b20a0) (5) Data frame handling\nI0701 01:04:55.490311 3197 log.go:172] (0xc0009b20a0) (5) Data frame sent\n+ nc -zv -t -w 2 172.17.0.12 31083\nConnection to 172.17.0.12 31083 port [tcp/31083] succeeded!\nI0701 01:04:55.490325 3197 log.go:172] (0xc0009e0f20) Data frame received for 5\nI0701 01:04:55.490395 3197 log.go:172] (0xc0009b20a0) (5) Data frame handling\nI0701 01:04:55.491703 3197 log.go:172] (0xc0009e0f20) Data frame received for 1\nI0701 01:04:55.491716 3197 log.go:172] (0xc000a428c0) (1) Data frame handling\nI0701 01:04:55.491731 3197 log.go:172] (0xc000a428c0) (1) Data frame sent\nI0701 01:04:55.491743 3197 log.go:172] (0xc0009e0f20) (0xc000a428c0) Stream removed, broadcasting: 1\nI0701 01:04:55.491964 3197 log.go:172] (0xc0009e0f20) Go away received\nI0701 01:04:55.492123 3197 log.go:172] (0xc0009e0f20) (0xc000a428c0) Stream removed, broadcasting: 1\nI0701 01:04:55.492150 3197 log.go:172] (0xc0009e0f20) (0xc0009b2000) Stream removed, broadcasting: 3\nI0701 01:04:55.492166 3197 log.go:172] (0xc0009e0f20) (0xc0009b20a0) Stream removed, broadcasting: 5\n" Jul 1 01:04:55.498: INFO: stdout: "" Jul 1 01:04:55.498: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-9432 execpod-affinitysw8rs -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://172.17.0.13:31083/ ; done' Jul 1 01:04:55.792: INFO: stderr: "I0701 01:04:55.632052 3219 log.go:172] (0xc000a4d340) (0xc000b8a3c0) Create stream\nI0701 01:04:55.632114 3219 log.go:172] (0xc000a4d340) (0xc000b8a3c0) Stream added, broadcasting: 1\nI0701 01:04:55.636839 3219 log.go:172] (0xc000a4d340) Reply frame received for 1\nI0701 01:04:55.636978 3219 log.go:172] (0xc000a4d340) (0xc0007743c0) Create stream\nI0701 01:04:55.637030 3219 log.go:172] (0xc000a4d340) (0xc0007743c0) Stream added, broadcasting: 3\nI0701 
01:04:55.638234 3219 log.go:172] (0xc000a4d340) Reply frame received for 3\nI0701 01:04:55.638286 3219 log.go:172] (0xc000a4d340) (0xc0005ed2c0) Create stream\nI0701 01:04:55.638302 3219 log.go:172] (0xc000a4d340) (0xc0005ed2c0) Stream added, broadcasting: 5\nI0701 01:04:55.639387 3219 log.go:172] (0xc000a4d340) Reply frame received for 5\nI0701 01:04:55.702907 3219 log.go:172] (0xc000a4d340) Data frame received for 5\nI0701 01:04:55.702942 3219 log.go:172] (0xc0005ed2c0) (5) Data frame handling\nI0701 01:04:55.702954 3219 log.go:172] (0xc0005ed2c0) (5) Data frame sent\n+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31083/\nI0701 01:04:55.702972 3219 log.go:172] (0xc000a4d340) Data frame received for 3\nI0701 01:04:55.702980 3219 log.go:172] (0xc0007743c0) (3) Data frame handling\nI0701 01:04:55.702989 3219 log.go:172] (0xc0007743c0) (3) Data frame sent\nI0701 01:04:55.706851 3219 log.go:172] (0xc000a4d340) Data frame received for 3\nI0701 01:04:55.706877 3219 log.go:172] (0xc0007743c0) (3) Data frame handling\nI0701 01:04:55.706904 3219 log.go:172] (0xc0007743c0) (3) Data frame sent\nI0701 01:04:55.707737 3219 log.go:172] (0xc000a4d340) Data frame received for 5\nI0701 01:04:55.707762 3219 log.go:172] (0xc0005ed2c0) (5) Data frame handling\nI0701 01:04:55.707791 3219 log.go:172] (0xc0005ed2c0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31083/\nI0701 01:04:55.708074 3219 log.go:172] (0xc000a4d340) Data frame received for 3\nI0701 01:04:55.708095 3219 log.go:172] (0xc0007743c0) (3) Data frame handling\nI0701 01:04:55.708116 3219 log.go:172] (0xc0007743c0) (3) Data frame sent\nI0701 01:04:55.712205 3219 log.go:172] (0xc000a4d340) Data frame received for 3\nI0701 01:04:55.712227 3219 log.go:172] (0xc0007743c0) (3) Data frame handling\nI0701 01:04:55.712243 3219 log.go:172] (0xc0007743c0) (3) Data frame sent\nI0701 01:04:55.712759 3219 log.go:172] (0xc000a4d340) Data frame received for 3\nI0701 01:04:55.712779 3219 log.go:172] (0xc0007743c0) (3) Data frame handling\nI0701 01:04:55.712790 3219 log.go:172] (0xc0007743c0) (3) Data frame sent\nI0701 01:04:55.712816 3219 log.go:172] (0xc000a4d340) Data frame received for 5\nI0701 01:04:55.712832 3219 log.go:172] (0xc0005ed2c0) (5) Data frame handling\nI0701 01:04:55.712841 3219 log.go:172] (0xc0005ed2c0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31083/\nI0701 01:04:55.718434 3219 log.go:172] (0xc000a4d340) Data frame received for 3\nI0701 01:04:55.718456 3219 log.go:172] (0xc0007743c0) (3) Data frame handling\nI0701 01:04:55.718475 3219 log.go:172] (0xc0007743c0) (3) Data frame sent\nI0701 01:04:55.718786 3219 log.go:172] (0xc000a4d340) Data frame received for 3\nI0701 01:04:55.718808 3219 log.go:172] (0xc0007743c0) (3) Data frame handling\nI0701 01:04:55.718818 3219 log.go:172] (0xc0007743c0) (3) Data frame sent\nI0701 01:04:55.718826 3219 log.go:172] (0xc000a4d340) Data frame received for 5\nI0701 01:04:55.718833 3219 log.go:172] (0xc0005ed2c0) (5) Data frame handling\nI0701 01:04:55.718841 3219 log.go:172] (0xc0005ed2c0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31083/\nI0701 01:04:55.722679 3219 log.go:172] (0xc000a4d340) Data frame received for 3\nI0701 01:04:55.722714 3219 log.go:172] (0xc0007743c0) (3) Data frame handling\nI0701 01:04:55.722744 3219 log.go:172] (0xc0007743c0) (3) Data frame sent\nI0701 01:04:55.723007 3219 log.go:172] (0xc000a4d340) Data frame received for 5\nI0701 01:04:55.723037 3219 
log.go:172] (0xc0005ed2c0) (5) Data frame handling\n+ echo\nI0701 01:04:55.723058 3219 log.go:172] (0xc000a4d340) Data frame received for 3\nI0701 01:04:55.723076 3219 log.go:172] (0xc0007743c0) (3) Data frame handling\nI0701 01:04:55.723087 3219 log.go:172] (0xc0007743c0) (3) Data frame sent\nI0701 01:04:55.723100 3219 log.go:172] (0xc0005ed2c0) (5) Data frame sent\nI0701 01:04:55.723109 3219 log.go:172] (0xc000a4d340) Data frame received for 5\nI0701 01:04:55.723117 3219 log.go:172] (0xc0005ed2c0) (5) Data frame handling\nI0701 01:04:55.723126 3219 log.go:172] (0xc0005ed2c0) (5) Data frame sent\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31083/\nI0701 01:04:55.726913 3219 log.go:172] (0xc000a4d340) Data frame received for 3\nI0701 01:04:55.726942 3219 log.go:172] (0xc0007743c0) (3) Data frame handling\nI0701 01:04:55.726965 3219 log.go:172] (0xc0007743c0) (3) Data frame sent\nI0701 01:04:55.727278 3219 log.go:172] (0xc000a4d340) Data frame received for 3\nI0701 01:04:55.727300 3219 log.go:172] (0xc0007743c0) (3) Data frame handling\nI0701 01:04:55.727334 3219 log.go:172] (0xc0007743c0) (3) Data frame sent\nI0701 01:04:55.727368 3219 log.go:172] (0xc000a4d340) Data frame received for 5\nI0701 01:04:55.727381 3219 log.go:172] (0xc0005ed2c0) (5) Data frame handling\nI0701 01:04:55.727394 3219 log.go:172] (0xc0005ed2c0) (5) Data frame sent\nI0701 01:04:55.727408 3219 log.go:172] (0xc000a4d340) Data frame received for 5\nI0701 01:04:55.727429 3219 log.go:172] (0xc0005ed2c0) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31083/\nI0701 01:04:55.727469 3219 log.go:172] (0xc0005ed2c0) (5) Data frame sent\nI0701 01:04:55.731427 3219 log.go:172] (0xc000a4d340) Data frame received for 3\nI0701 01:04:55.731458 3219 log.go:172] (0xc0007743c0) (3) Data frame handling\nI0701 01:04:55.731485 3219 log.go:172] (0xc0007743c0) (3) Data frame sent\nI0701 01:04:55.731835 3219 log.go:172] (0xc000a4d340) Data frame received for 3\nI0701 01:04:55.731867 3219 log.go:172] (0xc0007743c0) (3) Data frame handling\nI0701 01:04:55.731881 3219 log.go:172] (0xc0007743c0) (3) Data frame sent\nI0701 01:04:55.731898 3219 log.go:172] (0xc000a4d340) Data frame received for 5\nI0701 01:04:55.731913 3219 log.go:172] (0xc0005ed2c0) (5) Data frame handling\nI0701 01:04:55.731923 3219 log.go:172] (0xc0005ed2c0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31083/\nI0701 01:04:55.736032 3219 log.go:172] (0xc000a4d340) Data frame received for 3\nI0701 01:04:55.736066 3219 log.go:172] (0xc0007743c0) (3) Data frame handling\nI0701 01:04:55.736107 3219 log.go:172] (0xc0007743c0) (3) Data frame sent\nI0701 01:04:55.736495 3219 log.go:172] (0xc000a4d340) Data frame received for 5\nI0701 01:04:55.736530 3219 log.go:172] (0xc0005ed2c0) (5) Data frame handling\nI0701 01:04:55.736548 3219 log.go:172] (0xc0005ed2c0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31083/\nI0701 01:04:55.736567 3219 log.go:172] (0xc000a4d340) Data frame received for 3\nI0701 01:04:55.736583 3219 log.go:172] (0xc0007743c0) (3) Data frame handling\nI0701 01:04:55.736603 3219 log.go:172] (0xc0007743c0) (3) Data frame sent\nI0701 01:04:55.741413 3219 log.go:172] (0xc000a4d340) Data frame received for 3\nI0701 01:04:55.741430 3219 log.go:172] (0xc0007743c0) (3) Data frame handling\nI0701 01:04:55.741440 3219 log.go:172] (0xc0007743c0) (3) Data frame sent\nI0701 01:04:55.741690 3219 log.go:172] (0xc000a4d340) Data frame received for 3\nI0701 
01:04:55.741717 3219 log.go:172] (0xc000a4d340) Data frame received for 5\nI0701 01:04:55.741755 3219 log.go:172] (0xc0005ed2c0) (5) Data frame handling\nI0701 01:04:55.741771 3219 log.go:172] (0xc0005ed2c0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31083/\nI0701 01:04:55.741801 3219 log.go:172] (0xc0007743c0) (3) Data frame handling\nI0701 01:04:55.741827 3219 log.go:172] (0xc0007743c0) (3) Data frame sent\nI0701 01:04:55.745103 3219 log.go:172] (0xc000a4d340) Data frame received for 3\nI0701 01:04:55.745185 3219 log.go:172] (0xc0007743c0) (3) Data frame handling\nI0701 01:04:55.745214 3219 log.go:172] (0xc0007743c0) (3) Data frame sent\nI0701 01:04:55.746062 3219 log.go:172] (0xc000a4d340) Data frame received for 3\nI0701 01:04:55.746085 3219 log.go:172] (0xc0007743c0) (3) Data frame handling\nI0701 01:04:55.746100 3219 log.go:172] (0xc0007743c0) (3) Data frame sent\nI0701 01:04:55.746129 3219 log.go:172] (0xc000a4d340) Data frame received for 5\nI0701 01:04:55.746143 3219 log.go:172] (0xc0005ed2c0) (5) Data frame handling\nI0701 01:04:55.746157 3219 log.go:172] (0xc0005ed2c0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31083/\nI0701 01:04:55.749559 3219 log.go:172] (0xc000a4d340) Data frame received for 3\nI0701 01:04:55.749576 3219 log.go:172] (0xc0007743c0) (3) Data frame handling\nI0701 01:04:55.749589 3219 log.go:172] (0xc0007743c0) (3) Data frame sent\nI0701 01:04:55.750351 3219 log.go:172] (0xc000a4d340) Data frame received for 3\nI0701 01:04:55.750376 3219 log.go:172] (0xc000a4d340) Data frame received for 5\nI0701 01:04:55.750412 3219 log.go:172] (0xc0005ed2c0) (5) Data frame handling\nI0701 01:04:55.750428 3219 log.go:172] (0xc0005ed2c0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31083/\nI0701 01:04:55.750455 3219 log.go:172] (0xc0007743c0) (3) Data frame handling\nI0701 01:04:55.750491 3219 log.go:172] (0xc0007743c0) (3) Data frame sent\nI0701 01:04:55.754409 3219 log.go:172] (0xc000a4d340) Data frame received for 3\nI0701 01:04:55.754426 3219 log.go:172] (0xc0007743c0) (3) Data frame handling\nI0701 01:04:55.754436 3219 log.go:172] (0xc0007743c0) (3) Data frame sent\nI0701 01:04:55.755090 3219 log.go:172] (0xc000a4d340) Data frame received for 3\nI0701 01:04:55.755119 3219 log.go:172] (0xc0007743c0) (3) Data frame handling\nI0701 01:04:55.755134 3219 log.go:172] (0xc0007743c0) (3) Data frame sent\nI0701 01:04:55.755157 3219 log.go:172] (0xc000a4d340) Data frame received for 5\nI0701 01:04:55.755191 3219 log.go:172] (0xc0005ed2c0) (5) Data frame handling\nI0701 01:04:55.755219 3219 log.go:172] (0xc0005ed2c0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31083/\nI0701 01:04:55.762491 3219 log.go:172] (0xc000a4d340) Data frame received for 3\nI0701 01:04:55.762516 3219 log.go:172] (0xc0007743c0) (3) Data frame handling\nI0701 01:04:55.762544 3219 log.go:172] (0xc0007743c0) (3) Data frame sent\nI0701 01:04:55.763258 3219 log.go:172] (0xc000a4d340) Data frame received for 3\nI0701 01:04:55.763288 3219 log.go:172] (0xc0007743c0) (3) Data frame handling\nI0701 01:04:55.763303 3219 log.go:172] (0xc0007743c0) (3) Data frame sent\nI0701 01:04:55.763325 3219 log.go:172] (0xc000a4d340) Data frame received for 5\nI0701 01:04:55.763341 3219 log.go:172] (0xc0005ed2c0) (5) Data frame handling\nI0701 01:04:55.763353 3219 log.go:172] (0xc0005ed2c0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31083/\nI0701 
01:04:55.767386 3219 log.go:172] (0xc000a4d340) Data frame received for 3\nI0701 01:04:55.767409 3219 log.go:172] (0xc0007743c0) (3) Data frame handling\nI0701 01:04:55.767426 3219 log.go:172] (0xc0007743c0) (3) Data frame sent\nI0701 01:04:55.768182 3219 log.go:172] (0xc000a4d340) Data frame received for 3\nI0701 01:04:55.768212 3219 log.go:172] (0xc0007743c0) (3) Data frame handling\nI0701 01:04:55.768238 3219 log.go:172] (0xc0007743c0) (3) Data frame sent\nI0701 01:04:55.768272 3219 log.go:172] (0xc000a4d340) Data frame received for 5\nI0701 01:04:55.768289 3219 log.go:172] (0xc0005ed2c0) (5) Data frame handling\nI0701 01:04:55.768306 3219 log.go:172] (0xc0005ed2c0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31083/\nI0701 01:04:55.771793 3219 log.go:172] (0xc000a4d340) Data frame received for 3\nI0701 01:04:55.771825 3219 log.go:172] (0xc0007743c0) (3) Data frame handling\nI0701 01:04:55.771860 3219 log.go:172] (0xc0007743c0) (3) Data frame sent\nI0701 01:04:55.772084 3219 log.go:172] (0xc000a4d340) Data frame received for 5\nI0701 01:04:55.772111 3219 log.go:172] (0xc0005ed2c0) (5) Data frame handling\nI0701 01:04:55.772130 3219 log.go:172] (0xc0005ed2c0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31083/\nI0701 01:04:55.772284 3219 log.go:172] (0xc000a4d340) Data frame received for 3\nI0701 01:04:55.772307 3219 log.go:172] (0xc0007743c0) (3) Data frame handling\nI0701 01:04:55.772327 3219 log.go:172] (0xc0007743c0) (3) Data frame sent\nI0701 01:04:55.776678 3219 log.go:172] (0xc000a4d340) Data frame received for 3\nI0701 01:04:55.776692 3219 log.go:172] (0xc0007743c0) (3) Data frame handling\nI0701 01:04:55.776705 3219 log.go:172] (0xc0007743c0) (3) Data frame sent\nI0701 01:04:55.777050 3219 log.go:172] (0xc000a4d340) Data frame received for 3\nI0701 01:04:55.777061 3219 log.go:172] (0xc0007743c0) (3) Data frame handling\nI0701 01:04:55.777069 3219 log.go:172] (0xc0007743c0) (3) Data frame sent\nI0701 01:04:55.777089 3219 log.go:172] (0xc000a4d340) Data frame received for 5\nI0701 01:04:55.777246 3219 log.go:172] (0xc0005ed2c0) (5) Data frame handling\nI0701 01:04:55.777277 3219 log.go:172] (0xc0005ed2c0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31083/\nI0701 01:04:55.781898 3219 log.go:172] (0xc000a4d340) Data frame received for 3\nI0701 01:04:55.781940 3219 log.go:172] (0xc0007743c0) (3) Data frame handling\nI0701 01:04:55.781977 3219 log.go:172] (0xc0007743c0) (3) Data frame sent\nI0701 01:04:55.782323 3219 log.go:172] (0xc000a4d340) Data frame received for 5\nI0701 01:04:55.782342 3219 log.go:172] (0xc0005ed2c0) (5) Data frame handling\nI0701 01:04:55.782606 3219 log.go:172] (0xc000a4d340) Data frame received for 3\nI0701 01:04:55.782628 3219 log.go:172] (0xc0007743c0) (3) Data frame handling\nI0701 01:04:55.784612 3219 log.go:172] (0xc000a4d340) Data frame received for 1\nI0701 01:04:55.784640 3219 log.go:172] (0xc000b8a3c0) (1) Data frame handling\nI0701 01:04:55.784679 3219 log.go:172] (0xc000b8a3c0) (1) Data frame sent\nI0701 01:04:55.784708 3219 log.go:172] (0xc000a4d340) (0xc000b8a3c0) Stream removed, broadcasting: 1\nI0701 01:04:55.784832 3219 log.go:172] (0xc000a4d340) Go away received\nI0701 01:04:55.785508 3219 log.go:172] (0xc000a4d340) (0xc000b8a3c0) Stream removed, broadcasting: 1\nI0701 01:04:55.785541 3219 log.go:172] (0xc000a4d340) (0xc0007743c0) Stream removed, broadcasting: 3\nI0701 01:04:55.785560 3219 log.go:172] (0xc000a4d340) (0xc0005ed2c0) 
Stream removed, broadcasting: 5\n" Jul 1 01:04:55.792: INFO: stdout: "\naffinity-nodeport-kj4bz\naffinity-nodeport-kj4bz\naffinity-nodeport-kj4bz\naffinity-nodeport-kj4bz\naffinity-nodeport-kj4bz\naffinity-nodeport-kj4bz\naffinity-nodeport-kj4bz\naffinity-nodeport-kj4bz\naffinity-nodeport-kj4bz\naffinity-nodeport-kj4bz\naffinity-nodeport-kj4bz\naffinity-nodeport-kj4bz\naffinity-nodeport-kj4bz\naffinity-nodeport-kj4bz\naffinity-nodeport-kj4bz\naffinity-nodeport-kj4bz" Jul 1 01:04:55.792: INFO: Received response from host: Jul 1 01:04:55.793: INFO: Received response from host: affinity-nodeport-kj4bz Jul 1 01:04:55.793: INFO: Received response from host: affinity-nodeport-kj4bz Jul 1 01:04:55.793: INFO: Received response from host: affinity-nodeport-kj4bz Jul 1 01:04:55.793: INFO: Received response from host: affinity-nodeport-kj4bz Jul 1 01:04:55.793: INFO: Received response from host: affinity-nodeport-kj4bz Jul 1 01:04:55.793: INFO: Received response from host: affinity-nodeport-kj4bz Jul 1 01:04:55.793: INFO: Received response from host: affinity-nodeport-kj4bz Jul 1 01:04:55.793: INFO: Received response from host: affinity-nodeport-kj4bz Jul 1 01:04:55.793: INFO: Received response from host: affinity-nodeport-kj4bz Jul 1 01:04:55.793: INFO: Received response from host: affinity-nodeport-kj4bz Jul 1 01:04:55.793: INFO: Received response from host: affinity-nodeport-kj4bz Jul 1 01:04:55.793: INFO: Received response from host: affinity-nodeport-kj4bz Jul 1 01:04:55.793: INFO: Received response from host: affinity-nodeport-kj4bz Jul 1 01:04:55.793: INFO: Received response from host: affinity-nodeport-kj4bz Jul 1 01:04:55.793: INFO: Received response from host: affinity-nodeport-kj4bz Jul 1 01:04:55.793: INFO: Received response from host: affinity-nodeport-kj4bz Jul 1 01:04:55.793: INFO: Cleaning up the exec pod STEP: deleting ReplicationController affinity-nodeport in namespace services-9432, will wait for the garbage collector to delete the pods Jul 1 01:04:55.897: INFO: Deleting ReplicationController affinity-nodeport took: 5.765645ms Jul 1 01:04:55.998: INFO: Terminating ReplicationController affinity-nodeport pods took: 100.246793ms [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 1 01:05:05.327: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-9432" for this suite. 
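The whole affinity exercise above reduces to a handful of shell commands. A minimal re-run sketch, assuming kubectl is on PATH and pointed at the same cluster; the service name, ClusterIP 10.102.85.164, node IPs 172.17.0.12/.13, and NodePort 31083 are the values from this particular run and will differ elsewhere:

# reach the service by DNS name, by ClusterIP, and via each node's NodePort
kubectl exec -n services-9432 execpod-affinitysw8rs -- /bin/sh -x -c 'nc -zv -t -w 2 affinity-nodeport 80'
kubectl exec -n services-9432 execpod-affinitysw8rs -- /bin/sh -x -c 'nc -zv -t -w 2 10.102.85.164 80'
kubectl exec -n services-9432 execpod-affinitysw8rs -- /bin/sh -x -c 'nc -zv -t -w 2 172.17.0.13 31083'
kubectl exec -n services-9432 execpod-affinitysw8rs -- /bin/sh -x -c 'nc -zv -t -w 2 172.17.0.12 31083'
# with session affinity enabled, all 16 responses should name the same backend pod
kubectl exec -n services-9432 execpod-affinitysw8rs -- /bin/sh -x -c 'for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://172.17.0.13:31083/ ; done'

The sixteen identical "affinity-nodeport-kj4bz" responses in the stdout above are exactly what the assertion checks: repeated requests from one client all land on one endpoint.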
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:813 • [SLOW TEST:22.192 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should have session affinity work for NodePort service [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","total":294,"completed":267,"skipped":4397,"failed":1,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]"]} SSSSSSSSSSS ------------------------------ [k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 1 01:05:05.352: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54 [It] with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Jul 1 01:05:05.464: INFO: The status of Pod test-webserver-8ec85caf-5366-4a84-82f5-ef9a0c16158d is Pending, waiting for it to be Running (with Ready = true) Jul 1 01:05:07.468: INFO: The status of Pod test-webserver-8ec85caf-5366-4a84-82f5-ef9a0c16158d is Pending, waiting for it to be Running (with Ready = true) Jul 1 01:05:09.479: INFO: The status of Pod test-webserver-8ec85caf-5366-4a84-82f5-ef9a0c16158d is Running (Ready = false) Jul 1 01:05:11.468: INFO: The status of Pod test-webserver-8ec85caf-5366-4a84-82f5-ef9a0c16158d is Running (Ready = false) Jul 1 01:05:13.492: INFO: The status of Pod test-webserver-8ec85caf-5366-4a84-82f5-ef9a0c16158d is Running (Ready = false) Jul 1 01:05:15.469: INFO: The status of Pod test-webserver-8ec85caf-5366-4a84-82f5-ef9a0c16158d is Running (Ready = false) Jul 1 01:05:17.467: INFO: The status of Pod test-webserver-8ec85caf-5366-4a84-82f5-ef9a0c16158d is Running (Ready = false) Jul 1 01:05:19.468: INFO: The status of Pod test-webserver-8ec85caf-5366-4a84-82f5-ef9a0c16158d is Running (Ready = false) Jul 1 01:05:21.468: INFO: The status of Pod test-webserver-8ec85caf-5366-4a84-82f5-ef9a0c16158d is Running (Ready = false) Jul 1 01:05:23.468: INFO: The status of Pod test-webserver-8ec85caf-5366-4a84-82f5-ef9a0c16158d is Running (Ready = false) Jul 1 01:05:25.468: INFO: The status of Pod test-webserver-8ec85caf-5366-4a84-82f5-ef9a0c16158d is Running (Ready = false) Jul 1 01:05:27.468: INFO: The status of Pod test-webserver-8ec85caf-5366-4a84-82f5-ef9a0c16158d is Running (Ready = false) Jul 1 01:05:29.468: INFO: The status of Pod test-webserver-8ec85caf-5366-4a84-82f5-ef9a0c16158d is Running (Ready = false) Jul 1 01:05:31.468: INFO: The status of Pod test-webserver-8ec85caf-5366-4a84-82f5-ef9a0c16158d is 
Running (Ready = true) Jul 1 01:05:31.471: INFO: Container started at 2020-07-01 01:05:08 +0000 UTC, pod became ready at 2020-07-01 01:05:30 +0000 UTC [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 1 01:05:31.471: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-3367" for this suite. • [SLOW TEST:26.127 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]","total":294,"completed":268,"skipped":4408,"failed":1,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]"]} SSSSSSSSSSSS ------------------------------ [sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 1 01:05:31.480: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replicaset STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Given a Pod with a 'name' label pod-adoption-release is created STEP: When a replicaset with a matching selector is created STEP: Then the orphan pod is adopted STEP: When the matched label of one of its pods change Jul 1 01:05:36.722: INFO: Pod name pod-adoption-release: Found 1 pods out of 1 STEP: Then the pod is released [AfterEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 1 01:05:36.770: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replicaset-7542" for this suite. 
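Adoption and release in the ReplicaSet test just above are both observable through the pod's ownerReferences. A minimal sketch against this run's namespace (the replacement label value is hypothetical):

# after adoption, the ReplicaSet shows up as the pod's owner
kubectl get pod pod-adoption-release -n replicaset-7542 -o jsonpath='{.metadata.ownerReferences[0].kind}'   # ReplicaSet
# changing the matched 'name' label causes the controller to release the pod
kubectl label pod pod-adoption-release -n replicaset-7542 --overwrite name=released
kubectl get pod pod-adoption-release -n replicaset-7542 -o jsonpath='{.metadata.ownerReferences}'          # empty once released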
• [SLOW TEST:5.398 seconds] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance]","total":294,"completed":269,"skipped":4420,"failed":1,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]"]} SSSSSSSSSSSSS ------------------------------ [k8s.io] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 1 01:05:36.878: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should use the image defaults if command and args are blank [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 1 01:05:43.034: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-5671" for this suite. • [SLOW TEST:6.164 seconds] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should use the image defaults if command and args are blank [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance]","total":294,"completed":270,"skipped":4433,"failed":1,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]"]} SSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group and version but different kinds [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 1 01:05:43.043: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for multiple CRDs of same group and version but different kinds [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: CRs in the same group and version but different kinds (two CRDs) show up in OpenAPI documentation Jul 1 01:05:43.325: INFO: >>> kubeConfig: /root/.kube/config Jul 1 01:05:46.254: INFO: >>> 
kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 1 01:05:56.771: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-286" for this suite. • [SLOW TEST:13.737 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for multiple CRDs of same group and version but different kinds [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group and version but different kinds [Conformance]","total":294,"completed":271,"skipped":4440,"failed":1,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]"]} SSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicationController should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 1 01:05:56.780: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/rc.go:52 [It] should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Given a Pod with a 'name' label pod-adoption is created STEP: When a replication controller with a matching selector is created STEP: Then the orphan pod is adopted [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 1 01:06:01.881: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-7864" for this suite. 
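The ReplicationController variant above verifies the same adoption mechanics; the quickest manual check is again the pod's ownerReferences (namespace and pod name are from this run):

kubectl get pod pod-adoption -n replication-controller-7864 -o jsonpath='{.metadata.ownerReferences[0].kind}'   # ReplicationController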
• [SLOW TEST:5.110 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] ReplicationController should adopt matching pods on creation [Conformance]","total":294,"completed":272,"skipped":4454,"failed":1,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]"]} SSSSSSSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 1 01:06:01.890: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set Jul 1 01:06:06.061: INFO: Expected: &{} to match Container's Termination Message: -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 1 01:06:06.130: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-9564" for this suite. 
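The termination-message assertion above ("Expected: &{} to match Container's Termination Message: --") reads the terminated container state; a sketch of the same lookup with a hypothetical pod name:

# with terminationMessagePolicy: FallbackToLogsOnError, a container that
# exits 0 leaves the message empty; logs are only promoted on failure
kubectl get pod termination-message-pod -o jsonpath='{.status.containerStatuses[0].state.terminated.message}'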
•{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":294,"completed":273,"skipped":4467,"failed":1,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]"]} SSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 1 01:06:06.142: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating the pod Jul 1 01:06:12.749: INFO: Successfully updated pod "labelsupdatebc13b1a5-8eac-4a2b-a50e-62b49da036fc" [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 1 01:06:14.788: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-3631" for this suite. • [SLOW TEST:8.654 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:37 should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance]","total":294,"completed":274,"skipped":4477,"failed":1,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Update Demo should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 1 01:06:14.797: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:251 [BeforeEach] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:303 [It] should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a replication controller Jul 1 01:06:14.851: INFO: Running '/usr/local/bin/kubectl 
--server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-6667' Jul 1 01:06:15.207: INFO: stderr: "" Jul 1 01:06:15.207: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. Jul 1 01:06:15.207: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-6667' Jul 1 01:06:15.348: INFO: stderr: "" Jul 1 01:06:15.348: INFO: stdout: "update-demo-nautilus-8hdb6 update-demo-nautilus-fdvs7 " Jul 1 01:06:15.348: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-8hdb6 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-6667' Jul 1 01:06:15.464: INFO: stderr: "" Jul 1 01:06:15.464: INFO: stdout: "" Jul 1 01:06:15.464: INFO: update-demo-nautilus-8hdb6 is created but not running Jul 1 01:06:20.464: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-6667' Jul 1 01:06:20.572: INFO: stderr: "" Jul 1 01:06:20.572: INFO: stdout: "update-demo-nautilus-8hdb6 update-demo-nautilus-fdvs7 " Jul 1 01:06:20.572: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-8hdb6 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-6667' Jul 1 01:06:20.724: INFO: stderr: "" Jul 1 01:06:20.724: INFO: stdout: "true" Jul 1 01:06:20.724: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-8hdb6 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-6667' Jul 1 01:06:20.831: INFO: stderr: "" Jul 1 01:06:20.831: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Jul 1 01:06:20.831: INFO: validating pod update-demo-nautilus-8hdb6 Jul 1 01:06:20.835: INFO: got data: { "image": "nautilus.jpg" } Jul 1 01:06:20.835: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Jul 1 01:06:20.835: INFO: update-demo-nautilus-8hdb6 is verified up and running Jul 1 01:06:20.835: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-fdvs7 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-6667' Jul 1 01:06:20.944: INFO: stderr: "" Jul 1 01:06:20.944: INFO: stdout: "true" Jul 1 01:06:20.944: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-fdvs7 -o template --template={{if (exists . 
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-6667' Jul 1 01:06:21.045: INFO: stderr: "" Jul 1 01:06:21.045: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Jul 1 01:06:21.045: INFO: validating pod update-demo-nautilus-fdvs7 Jul 1 01:06:21.049: INFO: got data: { "image": "nautilus.jpg" } Jul 1 01:06:21.049: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Jul 1 01:06:21.049: INFO: update-demo-nautilus-fdvs7 is verified up and running STEP: scaling down the replication controller Jul 1 01:06:21.084: INFO: scanned /root for discovery docs: Jul 1 01:06:21.084: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=1 --timeout=5m --namespace=kubectl-6667' Jul 1 01:06:22.207: INFO: stderr: "" Jul 1 01:06:22.207: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" STEP: waiting for all containers in name=update-demo pods to come up. Jul 1 01:06:22.207: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-6667' Jul 1 01:06:22.316: INFO: stderr: "" Jul 1 01:06:22.316: INFO: stdout: "update-demo-nautilus-8hdb6 update-demo-nautilus-fdvs7 " STEP: Replicas for name=update-demo: expected=1 actual=2 Jul 1 01:06:27.317: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-6667' Jul 1 01:06:27.441: INFO: stderr: "" Jul 1 01:06:27.441: INFO: stdout: "update-demo-nautilus-8hdb6 update-demo-nautilus-fdvs7 " STEP: Replicas for name=update-demo: expected=1 actual=2 Jul 1 01:06:32.441: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-6667' Jul 1 01:06:32.563: INFO: stderr: "" Jul 1 01:06:32.563: INFO: stdout: "update-demo-nautilus-8hdb6 update-demo-nautilus-fdvs7 " STEP: Replicas for name=update-demo: expected=1 actual=2 Jul 1 01:06:37.563: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-6667' Jul 1 01:06:37.653: INFO: stderr: "" Jul 1 01:06:37.653: INFO: stdout: "update-demo-nautilus-fdvs7 " Jul 1 01:06:37.653: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-fdvs7 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-6667' Jul 1 01:06:37.748: INFO: stderr: "" Jul 1 01:06:37.748: INFO: stdout: "true" Jul 1 01:06:37.748: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-fdvs7 -o template --template={{if (exists . 
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-6667' Jul 1 01:06:37.855: INFO: stderr: "" Jul 1 01:06:37.855: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Jul 1 01:06:37.855: INFO: validating pod update-demo-nautilus-fdvs7 Jul 1 01:06:37.858: INFO: got data: { "image": "nautilus.jpg" } Jul 1 01:06:37.858: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Jul 1 01:06:37.858: INFO: update-demo-nautilus-fdvs7 is verified up and running STEP: scaling up the replication controller Jul 1 01:06:37.861: INFO: scanned /root for discovery docs: Jul 1 01:06:37.861: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=2 --timeout=5m --namespace=kubectl-6667' Jul 1 01:06:39.047: INFO: stderr: "" Jul 1 01:06:39.047: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" STEP: waiting for all containers in name=update-demo pods to come up. Jul 1 01:06:39.047: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-6667' Jul 1 01:06:39.157: INFO: stderr: "" Jul 1 01:06:39.157: INFO: stdout: "update-demo-nautilus-fdvs7 update-demo-nautilus-w96g9 " Jul 1 01:06:39.157: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-fdvs7 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-6667' Jul 1 01:06:39.262: INFO: stderr: "" Jul 1 01:06:39.263: INFO: stdout: "true" Jul 1 01:06:39.263: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-fdvs7 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-6667' Jul 1 01:06:39.397: INFO: stderr: "" Jul 1 01:06:39.397: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Jul 1 01:06:39.397: INFO: validating pod update-demo-nautilus-fdvs7 Jul 1 01:06:39.400: INFO: got data: { "image": "nautilus.jpg" } Jul 1 01:06:39.400: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Jul 1 01:06:39.400: INFO: update-demo-nautilus-fdvs7 is verified up and running Jul 1 01:06:39.400: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-w96g9 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-6667' Jul 1 01:06:39.498: INFO: stderr: "" Jul 1 01:06:39.499: INFO: stdout: "" Jul 1 01:06:39.499: INFO: update-demo-nautilus-w96g9 is created but not running Jul 1 01:06:44.499: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-6667' Jul 1 01:06:44.601: INFO: stderr: "" Jul 1 01:06:44.601: INFO: stdout: "update-demo-nautilus-fdvs7 update-demo-nautilus-w96g9 " Jul 1 01:06:44.601: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-fdvs7 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-6667' Jul 1 01:06:44.699: INFO: stderr: "" Jul 1 01:06:44.699: INFO: stdout: "true" Jul 1 01:06:44.699: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-fdvs7 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-6667' Jul 1 01:06:44.805: INFO: stderr: "" Jul 1 01:06:44.805: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Jul 1 01:06:44.805: INFO: validating pod update-demo-nautilus-fdvs7 Jul 1 01:06:44.809: INFO: got data: { "image": "nautilus.jpg" } Jul 1 01:06:44.809: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Jul 1 01:06:44.809: INFO: update-demo-nautilus-fdvs7 is verified up and running Jul 1 01:06:44.809: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-w96g9 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-6667' Jul 1 01:06:44.905: INFO: stderr: "" Jul 1 01:06:44.905: INFO: stdout: "true" Jul 1 01:06:44.905: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-w96g9 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-6667' Jul 1 01:06:45.008: INFO: stderr: "" Jul 1 01:06:45.008: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Jul 1 01:06:45.008: INFO: validating pod update-demo-nautilus-w96g9 Jul 1 01:06:45.012: INFO: got data: { "image": "nautilus.jpg" } Jul 1 01:06:45.012: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Jul 1 01:06:45.012: INFO: update-demo-nautilus-w96g9 is verified up and running STEP: using delete to clean up resources Jul 1 01:06:45.012: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-6667' Jul 1 01:06:45.134: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" Jul 1 01:06:45.134: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" Jul 1 01:06:45.134: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-6667' Jul 1 01:06:45.249: INFO: stderr: "No resources found in kubectl-6667 namespace.\n" Jul 1 01:06:45.249: INFO: stdout: "" Jul 1 01:06:45.249: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-6667 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Jul 1 01:06:45.359: INFO: stderr: "" Jul 1 01:06:45.359: INFO: stdout: "update-demo-nautilus-fdvs7\nupdate-demo-nautilus-w96g9\n" Jul 1 01:06:45.860: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-6667' Jul 1 01:06:45.971: INFO: stderr: "No resources found in kubectl-6667 namespace.\n" Jul 1 01:06:45.971: INFO: stdout: "" Jul 1 01:06:45.972: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-6667 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Jul 1 01:06:46.064: INFO: stderr: "" Jul 1 01:06:46.064: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 1 01:06:46.064: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-6667" for this suite. 
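Stripped of the --server/--kubeconfig plumbing, the scale test above is this kubectl sequence (resource names from this run; the manifest filename is hypothetical, since the suite pipes the manifest over stdin):

kubectl create -f update-demo-nautilus-rc.yaml -n kubectl-6667
kubectl scale rc update-demo-nautilus --replicas=1 --timeout=5m -n kubectl-6667
kubectl get pods -l name=update-demo -n kubectl-6667 -o template --template='{{range .items}}{{.metadata.name}} {{end}}'
kubectl scale rc update-demo-nautilus --replicas=2 --timeout=5m -n kubectl-6667
kubectl delete rc update-demo-nautilus --grace-period=0 --force -n kubectl-6667

Note that scaling down is not instantaneous: the suite polls the pod list (the repeated "expected=1 actual=2" lines above) until the surplus pod is actually gone.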
• [SLOW TEST:31.274 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:301 should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Update Demo should scale a replication controller [Conformance]","total":294,"completed":275,"skipped":4499,"failed":1,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]"]} SSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 1 01:06:46.071: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:38 [BeforeEach] when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:82 [It] should have an terminated reason [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 1 01:06:50.503: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-5350" for this suite. 
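For the Kubelet test above, a container whose command always fails ends up with a populated terminated state; a sketch of the check (the pod name is hypothetical, and depending on restart timing the reason sits under state or lastState):

kubectl get pod bin-false-pod -n kubelet-test-5350 -o jsonpath='{.status.containerStatuses[0].lastState.terminated.reason}'   # typically Error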
•{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance]","total":294,"completed":276,"skipped":4506,"failed":1,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 1 01:06:50.513: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating secret with name secret-test-42917c66-6d04-4734-a37a-44843edceb73 STEP: Creating a pod to test consume secrets Jul 1 01:06:50.747: INFO: Waiting up to 5m0s for pod "pod-secrets-45042fcb-ceef-4d03-b9a1-27cc9c7f9285" in namespace "secrets-8878" to be "Succeeded or Failed" Jul 1 01:06:50.755: INFO: Pod "pod-secrets-45042fcb-ceef-4d03-b9a1-27cc9c7f9285": Phase="Pending", Reason="", readiness=false. Elapsed: 8.394984ms Jul 1 01:06:53.093: INFO: Pod "pod-secrets-45042fcb-ceef-4d03-b9a1-27cc9c7f9285": Phase="Pending", Reason="", readiness=false. Elapsed: 2.34624686s Jul 1 01:06:55.097: INFO: Pod "pod-secrets-45042fcb-ceef-4d03-b9a1-27cc9c7f9285": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.350544537s STEP: Saw pod success Jul 1 01:06:55.098: INFO: Pod "pod-secrets-45042fcb-ceef-4d03-b9a1-27cc9c7f9285" satisfied condition "Succeeded or Failed" Jul 1 01:06:55.100: INFO: Trying to get logs from node latest-worker2 pod pod-secrets-45042fcb-ceef-4d03-b9a1-27cc9c7f9285 container secret-volume-test: STEP: delete the pod Jul 1 01:06:55.374: INFO: Waiting for pod pod-secrets-45042fcb-ceef-4d03-b9a1-27cc9c7f9285 to disappear Jul 1 01:06:55.479: INFO: Pod pod-secrets-45042fcb-ceef-4d03-b9a1-27cc9c7f9285 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 1 01:06:55.479: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-8878" for this suite. 
•{"msg":"PASSED [sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":294,"completed":277,"skipped":4533,"failed":1,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]"]} SSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 1 01:06:55.503: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64 STEP: create the container to handle the HTTPGet hook request. [It] should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the pod with lifecycle hook STEP: delete the pod with lifecycle hook Jul 1 01:07:03.743: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Jul 1 01:07:03.755: INFO: Pod pod-with-prestop-http-hook still exists Jul 1 01:07:05.756: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Jul 1 01:07:05.761: INFO: Pod pod-with-prestop-http-hook still exists Jul 1 01:07:07.756: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Jul 1 01:07:07.760: INFO: Pod pod-with-prestop-http-hook still exists Jul 1 01:07:09.756: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Jul 1 01:07:09.760: INFO: Pod pod-with-prestop-http-hook still exists Jul 1 01:07:11.756: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Jul 1 01:07:11.761: INFO: Pod pod-with-prestop-http-hook still exists Jul 1 01:07:13.756: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Jul 1 01:07:13.761: INFO: Pod pod-with-prestop-http-hook still exists Jul 1 01:07:15.756: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Jul 1 01:07:15.760: INFO: Pod pod-with-prestop-http-hook no longer exists STEP: check prestop hook [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 1 01:07:15.767: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-6334" for this suite. 
• [SLOW TEST:20.272 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance]","total":294,"completed":278,"skipped":4539,"failed":1,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]"]} SSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] should include custom resource definition resources in discovery documents [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 1 01:07:15.776: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] should include custom resource definition resources in discovery documents [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: fetching the /apis discovery document STEP: finding the apiextensions.k8s.io API group in the /apis discovery document STEP: finding the apiextensions.k8s.io/v1 API group/version in the /apis discovery document STEP: fetching the /apis/apiextensions.k8s.io discovery document STEP: finding the apiextensions.k8s.io/v1 API group/version in the /apis/apiextensions.k8s.io discovery document STEP: fetching the /apis/apiextensions.k8s.io/v1 discovery document STEP: finding customresourcedefinitions resources in the /apis/apiextensions.k8s.io/v1 discovery document [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 1 01:07:15.865: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-857" for this suite. 
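The discovery walk above can be retraced by hand, since each document the test fetches is a plain API path:

kubectl get --raw /apis                          # root document; lists apiextensions.k8s.io
kubectl get --raw /apis/apiextensions.k8s.io     # group document; lists v1
kubectl get --raw /apis/apiextensions.k8s.io/v1 | grep customresourcedefinitions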
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] should include custom resource definition resources in discovery documents [Conformance]","total":294,"completed":279,"skipped":4545,"failed":1,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition getting/updating/patching custom resource definition status sub-resource works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 1 01:07:15.876: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] getting/updating/patching custom resource definition status sub-resource works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Jul 1 01:07:15.925: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 1 01:07:16.525: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-9438" for this suite. 
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition getting/updating/patching custom resource definition status sub-resource works [Conformance]","total":294,"completed":280,"skipped":4590,"failed":1,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]"]} SSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl describe should check if kubectl describe prints relevant information for rc and pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 1 01:07:16.560: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:251 [It] should check if kubectl describe prints relevant information for rc and pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Jul 1 01:07:16.654: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-5137' Jul 1 01:07:19.755: INFO: stderr: "" Jul 1 01:07:19.755: INFO: stdout: "replicationcontroller/agnhost-master created\n" Jul 1 01:07:19.755: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-5137' Jul 1 01:07:20.159: INFO: stderr: "" Jul 1 01:07:20.159: INFO: stdout: "service/agnhost-master created\n" STEP: Waiting for Agnhost master to start. Jul 1 01:07:21.175: INFO: Selector matched 1 pods for map[app:agnhost] Jul 1 01:07:21.175: INFO: Found 0 / 1 Jul 1 01:07:22.205: INFO: Selector matched 1 pods for map[app:agnhost] Jul 1 01:07:22.205: INFO: Found 0 / 1 Jul 1 01:07:23.163: INFO: Selector matched 1 pods for map[app:agnhost] Jul 1 01:07:23.163: INFO: Found 0 / 1 Jul 1 01:07:24.164: INFO: Selector matched 1 pods for map[app:agnhost] Jul 1 01:07:24.164: INFO: Found 1 / 1 Jul 1 01:07:24.164: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 Jul 1 01:07:24.167: INFO: Selector matched 1 pods for map[app:agnhost] Jul 1 01:07:24.167: INFO: ForEach: Found 1 pods from the filter. Now looping through them. 
Jul 1 01:07:24.167: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config describe pod agnhost-master-x57pv --namespace=kubectl-5137' Jul 1 01:07:24.285: INFO: stderr: "" Jul 1 01:07:24.285: INFO: stdout: "Name: agnhost-master-x57pv\nNamespace: kubectl-5137\nPriority: 0\nNode: latest-worker2/172.17.0.12\nStart Time: Wed, 01 Jul 2020 01:07:19 +0000\nLabels: app=agnhost\n role=master\nAnnotations: \nStatus: Running\nIP: 10.244.2.36\nIPs:\n IP: 10.244.2.36\nControlled By: ReplicationController/agnhost-master\nContainers:\n agnhost-master:\n Container ID: containerd://863d8f42a61cbc7b981d513030169ee941cb344d272c69a91a532734403d8f10\n Image: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13\n Image ID: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost@sha256:6d5c9e684dd8f91cc36601933d51b91768d0606593de6820e19e5f194b0df1b9\n Port: 6379/TCP\n Host Port: 0/TCP\n State: Running\n Started: Wed, 01 Jul 2020 01:07:23 +0000\n Ready: True\n Restart Count: 0\n Environment: \n Mounts:\n /var/run/secrets/kubernetes.io/serviceaccount from default-token-dctgj (ro)\nConditions:\n Type Status\n Initialized True \n Ready True \n ContainersReady True \n PodScheduled True \nVolumes:\n default-token-dctgj:\n Type: Secret (a volume populated by a Secret)\n SecretName: default-token-dctgj\n Optional: false\nQoS Class: BestEffort\nNode-Selectors: \nTolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s\n node.kubernetes.io/unreachable:NoExecute op=Exists for 300s\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal Scheduled 4s default-scheduler Successfully assigned kubectl-5137/agnhost-master-x57pv to latest-worker2\n Normal Pulled 3s kubelet, latest-worker2 Container image \"us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13\" already present on machine\n Normal Created 2s kubelet, latest-worker2 Created container agnhost-master\n Normal Started 1s kubelet, latest-worker2 Started container agnhost-master\n" Jul 1 01:07:24.285: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config describe rc agnhost-master --namespace=kubectl-5137' Jul 1 01:07:24.410: INFO: stderr: "" Jul 1 01:07:24.410: INFO: stdout: "Name: agnhost-master\nNamespace: kubectl-5137\nSelector: app=agnhost,role=master\nLabels: app=agnhost\n role=master\nAnnotations: \nReplicas: 1 current / 1 desired\nPods Status: 1 Running / 0 Waiting / 0 Succeeded / 0 Failed\nPod Template:\n Labels: app=agnhost\n role=master\n Containers:\n agnhost-master:\n Image: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13\n Port: 6379/TCP\n Host Port: 0/TCP\n Environment: \n Mounts: \n Volumes: \nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal SuccessfulCreate 5s replication-controller Created pod: agnhost-master-x57pv\n" Jul 1 01:07:24.410: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config describe service agnhost-master --namespace=kubectl-5137' Jul 1 01:07:24.528: INFO: stderr: "" Jul 1 01:07:24.528: INFO: stdout: "Name: agnhost-master\nNamespace: kubectl-5137\nLabels: app=agnhost\n role=master\nAnnotations: \nSelector: app=agnhost,role=master\nType: ClusterIP\nIP: 10.97.131.32\nPort: 6379/TCP\nTargetPort: agnhost-server/TCP\nEndpoints: 10.244.2.36:6379\nSession Affinity: None\nEvents: \n" Jul 1 01:07:24.532: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config 
describe node latest-control-plane' Jul 1 01:07:24.689: INFO: stderr: "" Jul 1 01:07:24.689: INFO: stdout: "Name: latest-control-plane\nRoles: master\nLabels: beta.kubernetes.io/arch=amd64\n beta.kubernetes.io/os=linux\n kubernetes.io/arch=amd64\n kubernetes.io/hostname=latest-control-plane\n kubernetes.io/os=linux\n node-role.kubernetes.io/master=\nAnnotations: kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock\n node.alpha.kubernetes.io/ttl: 0\n volumes.kubernetes.io/controller-managed-attach-detach: true\nCreationTimestamp: Wed, 29 Apr 2020 09:53:29 +0000\nTaints: node-role.kubernetes.io/master:NoSchedule\nUnschedulable: false\nLease:\n HolderIdentity: latest-control-plane\n AcquireTime: \n RenewTime: Wed, 01 Jul 2020 01:07:22 +0000\nConditions:\n Type Status LastHeartbeatTime LastTransitionTime Reason Message\n ---- ------ ----------------- ------------------ ------ -------\n MemoryPressure False Wed, 01 Jul 2020 01:06:25 +0000 Wed, 29 Apr 2020 09:53:26 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available\n DiskPressure False Wed, 01 Jul 2020 01:06:25 +0000 Wed, 29 Apr 2020 09:53:26 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure\n PIDPressure False Wed, 01 Jul 2020 01:06:25 +0000 Wed, 29 Apr 2020 09:53:26 +0000 KubeletHasSufficientPID kubelet has sufficient PID available\n Ready True Wed, 01 Jul 2020 01:06:25 +0000 Wed, 29 Apr 2020 09:54:06 +0000 KubeletReady kubelet is posting ready status\nAddresses:\n InternalIP: 172.17.0.11\n Hostname: latest-control-plane\nCapacity:\n cpu: 16\n ephemeral-storage: 2303189964Ki\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 131759892Ki\n pods: 110\nAllocatable:\n cpu: 16\n ephemeral-storage: 2303189964Ki\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 131759892Ki\n pods: 110\nSystem Info:\n Machine ID: 3939cf129c9d4d6e85e611ab996d9137\n System UUID: 2573ae1d-4849-412e-9a34-432f95556990\n Boot ID: ca2aa731-f890-4956-92a1-ff8c7560d571\n Kernel Version: 4.15.0-88-generic\n OS Image: Ubuntu 19.10\n Operating System: linux\n Architecture: amd64\n Container Runtime Version: containerd://1.3.3-14-g449e9269\n Kubelet Version: v1.18.2\n Kube-Proxy Version: v1.18.2\nPodCIDR: 10.244.0.0/24\nPodCIDRs: 10.244.0.0/24\nNon-terminated Pods: (9 in total)\n Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits AGE\n --------- ---- ------------ ---------- --------------- ------------- ---\n kube-system coredns-66bff467f8-8n5vh 100m (0%) 0 (0%) 70Mi (0%) 170Mi (0%) 62d\n kube-system coredns-66bff467f8-qr7l5 100m (0%) 0 (0%) 70Mi (0%) 170Mi (0%) 62d\n kube-system etcd-latest-control-plane 0 (0%) 0 (0%) 0 (0%) 0 (0%) 62d\n kube-system kindnet-8x7pf 100m (0%) 100m (0%) 50Mi (0%) 50Mi (0%) 62d\n kube-system kube-apiserver-latest-control-plane 250m (1%) 0 (0%) 0 (0%) 0 (0%) 62d\n kube-system kube-controller-manager-latest-control-plane 200m (1%) 0 (0%) 0 (0%) 0 (0%) 62d\n kube-system kube-proxy-h8mhz 0 (0%) 0 (0%) 0 (0%) 0 (0%) 62d\n kube-system kube-scheduler-latest-control-plane 100m (0%) 0 (0%) 0 (0%) 0 (0%) 62d\n local-path-storage local-path-provisioner-bd4bb6b75-bmf2h 0 (0%) 0 (0%) 0 (0%) 0 (0%) 62d\nAllocated resources:\n (Total limits may be over 100 percent, i.e., overcommitted.)\n Resource Requests Limits\n -------- -------- ------\n cpu 850m (5%) 100m (0%)\n memory 190Mi (0%) 390Mi (0%)\n ephemeral-storage 0 (0%) 0 (0%)\n hugepages-1Gi 0 (0%) 0 (0%)\n hugepages-2Mi 0 (0%) 0 (0%)\nEvents: \n" Jul 1 01:07:24.690: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 
--kubeconfig=/root/.kube/config describe namespace kubectl-5137' Jul 1 01:07:24.805: INFO: stderr: "" Jul 1 01:07:24.805: INFO: stdout: "Name: kubectl-5137\nLabels: e2e-framework=kubectl\n e2e-run=f39acfdd-f386-43a2-964e-ea79da272e01\nAnnotations: <none>\nStatus: Active\n\nNo resource quota.\n\nNo LimitRange resource.\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 1 01:07:24.805: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-5137" for this suite. • [SLOW TEST:8.252 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl describe /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1088 should check if kubectl describe prints relevant information for rc and pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl describe should check if kubectl describe prints relevant information for rc and pods [Conformance]","total":294,"completed":281,"skipped":4601,"failed":1,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicationController should surface a failure condition on a common issue like exceeded quota [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 1 01:07:24.812: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/rc.go:52 [It] should surface a failure condition on a common issue like exceeded quota [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Jul 1 01:07:24.924: INFO: Creating quota "condition-test" that allows only two pods to run in the current namespace STEP: Creating rc "condition-test" that asks for more than the allowed pod quota STEP: Checking rc "condition-test" has the desired failure condition set STEP: Scaling down rc "condition-test" to satisfy pod quota Jul 1 01:07:26.968: INFO: Updating replication controller "condition-test" STEP: Checking rc "condition-test" has no failure condition set [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 1 01:07:28.163: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-7742" for this suite.
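The quota-pressure scenario above is straightforward to reproduce: the controller asks for one pod more than the quota allows, and the shortfall surfaces as a ReplicaFailure condition. Names below are illustrative:

kubectl create quota condition-test --hard=pods=2
kubectl create -f - <<EOF
apiVersion: v1
kind: ReplicationController
metadata:
  name: condition-test
spec:
  replicas: 3
  selector: { app: condition-test }
  template:
    metadata:
      labels: { app: condition-test }
    spec:
      containers:
      - { name: pause, image: k8s.gcr.io/pause:3.2 }
EOF
kubectl get rc condition-test \
  -o jsonpath='{.status.conditions[?(@.type=="ReplicaFailure")].reason}'
# Scaling back down to 2 replicas, as the test does, clears the condition.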
•{"msg":"PASSED [sig-apps] ReplicationController should surface a failure condition on a common issue like exceeded quota [Conformance]","total":294,"completed":282,"skipped":4628,"failed":1,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]"]} SSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should be able to change the type from ClusterIP to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 1 01:07:28.174: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:809 [It] should be able to change the type from ClusterIP to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a service clusterip-service with the type=ClusterIP in namespace services-3416 STEP: Creating active service to test reachability when its FQDN is referred as externalName for another service STEP: creating service externalsvc in namespace services-3416 STEP: creating replication controller externalsvc in namespace services-3416 I0701 01:07:29.133094 8 runners.go:190] Created replication controller with name: externalsvc, namespace: services-3416, replica count: 2 I0701 01:07:32.183631 8 runners.go:190] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0701 01:07:35.183880 8 runners.go:190] externalsvc Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady STEP: changing the ClusterIP service to type=ExternalName Jul 1 01:07:35.218: INFO: Creating new exec pod Jul 1 01:07:39.289: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-3416 execpodkjv8m -- /bin/sh -x -c nslookup clusterip-service' Jul 1 01:07:39.545: INFO: stderr: "I0701 01:07:39.424451 3994 log.go:172] (0xc0000e78c0) (0xc000843180) Create stream\nI0701 01:07:39.424522 3994 log.go:172] (0xc0000e78c0) (0xc000843180) Stream added, broadcasting: 1\nI0701 01:07:39.426486 3994 log.go:172] (0xc0000e78c0) Reply frame received for 1\nI0701 01:07:39.426524 3994 log.go:172] (0xc0000e78c0) (0xc0008437c0) Create stream\nI0701 01:07:39.426540 3994 log.go:172] (0xc0000e78c0) (0xc0008437c0) Stream added, broadcasting: 3\nI0701 01:07:39.427581 3994 log.go:172] (0xc0000e78c0) Reply frame received for 3\nI0701 01:07:39.427622 3994 log.go:172] (0xc0000e78c0) (0xc000832780) Create stream\nI0701 01:07:39.427648 3994 log.go:172] (0xc0000e78c0) (0xc000832780) Stream added, broadcasting: 5\nI0701 01:07:39.428546 3994 log.go:172] (0xc0000e78c0) Reply frame received for 5\nI0701 01:07:39.515006 3994 log.go:172] (0xc0000e78c0) Data frame received for 5\nI0701 01:07:39.515031 3994 log.go:172] (0xc000832780) (5) Data frame handling\nI0701 01:07:39.515053 3994 log.go:172] (0xc000832780) (5) Data frame sent\n+ nslookup clusterip-service\nI0701 01:07:39.532691 3994 log.go:172] (0xc0000e78c0) Data frame received for 
3\nI0701 01:07:39.532714 3994 log.go:172] (0xc0008437c0) (3) Data frame handling\nI0701 01:07:39.532735 3994 log.go:172] (0xc0008437c0) (3) Data frame sent\nI0701 01:07:39.533690 3994 log.go:172] (0xc0000e78c0) Data frame received for 3\nI0701 01:07:39.533725 3994 log.go:172] (0xc0008437c0) (3) Data frame handling\nI0701 01:07:39.533748 3994 log.go:172] (0xc0008437c0) (3) Data frame sent\nI0701 01:07:39.534083 3994 log.go:172] (0xc0000e78c0) Data frame received for 3\nI0701 01:07:39.534120 3994 log.go:172] (0xc0008437c0) (3) Data frame handling\nI0701 01:07:39.534272 3994 log.go:172] (0xc0000e78c0) Data frame received for 5\nI0701 01:07:39.534298 3994 log.go:172] (0xc000832780) (5) Data frame handling\nI0701 01:07:39.536136 3994 log.go:172] (0xc0000e78c0) Data frame received for 1\nI0701 01:07:39.536168 3994 log.go:172] (0xc000843180) (1) Data frame handling\nI0701 01:07:39.536271 3994 log.go:172] (0xc000843180) (1) Data frame sent\nI0701 01:07:39.536285 3994 log.go:172] (0xc0000e78c0) (0xc000843180) Stream removed, broadcasting: 1\nI0701 01:07:39.536303 3994 log.go:172] (0xc0000e78c0) Go away received\nI0701 01:07:39.536662 3994 log.go:172] (0xc0000e78c0) (0xc000843180) Stream removed, broadcasting: 1\nI0701 01:07:39.536681 3994 log.go:172] (0xc0000e78c0) (0xc0008437c0) Stream removed, broadcasting: 3\nI0701 01:07:39.536690 3994 log.go:172] (0xc0000e78c0) (0xc000832780) Stream removed, broadcasting: 5\n" Jul 1 01:07:39.545: INFO: stdout: "Server:\t\t10.96.0.10\nAddress:\t10.96.0.10#53\n\nclusterip-service.services-3416.svc.cluster.local\tcanonical name = externalsvc.services-3416.svc.cluster.local.\nName:\texternalsvc.services-3416.svc.cluster.local\nAddress: 10.101.114.109\n\n" STEP: deleting ReplicationController externalsvc in namespace services-3416, will wait for the garbage collector to delete the pods Jul 1 01:07:39.605: INFO: Deleting ReplicationController externalsvc took: 6.677819ms Jul 1 01:07:39.805: INFO: Terminating ReplicationController externalsvc pods took: 200.27641ms Jul 1 01:07:54.920: INFO: Cleaning up the ClusterIP to ExternalName test service [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 1 01:07:54.970: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-3416" for this suite. 
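The type flip at the heart of this test can be expressed as a single patch, though validation details vary by version (clusterIP has to be cleared when a Service becomes ExternalName); names below are illustrative:

kubectl patch svc clusterip-service -p \
  '{"spec":{"type":"ExternalName","externalName":"externalsvc.services-3416.svc.cluster.local","clusterIP":null}}'
# Afterwards the service resolves as a CNAME, which is what the nslookup output above shows.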
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:813 • [SLOW TEST:26.830 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from ClusterIP to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from ClusterIP to ExternalName [Conformance]","total":294,"completed":283,"skipped":4645,"failed":1,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]"]} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 1 01:07:55.005: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54 [It] should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating pod liveness-5dbad4fb-6108-4fbb-bc49-c274a2b0b355 in namespace container-probe-9483 Jul 1 01:07:59.132: INFO: Started pod liveness-5dbad4fb-6108-4fbb-bc49-c274a2b0b355 in namespace container-probe-9483 STEP: checking the pod's current state and verifying that restartCount is present Jul 1 01:07:59.135: INFO: Initial restart count of pod liveness-5dbad4fb-6108-4fbb-bc49-c274a2b0b355 is 0 STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 1 01:11:59.851: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-9483" for this suite. 
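A probe that never fails is the whole test: the pod runs for four minutes and the restart count must stay at zero. The corresponding pod configuration is just a tcpSocket liveness probe; the name, port, and timings below are illustrative:

kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: liveness-tcp-demo
spec:
  containers:
  - name: web
    image: nginx
    ports:
    - containerPort: 80
    livenessProbe:
      tcpSocket:
        port: 80
      initialDelaySeconds: 15
      periodSeconds: 10
EOF
# A restartCount that stays at 0 means the probe keeps succeeding:
kubectl get pod liveness-tcp-demo -o jsonpath='{.status.containerStatuses[0].restartCount}'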
• [SLOW TEST:244.852 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance]","total":294,"completed":284,"skipped":4666,"failed":1,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]"]} SSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 1 01:11:59.858: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name cm-test-opt-del-1b5c0fe0-9a22-4576-958a-49666e2024da STEP: Creating configMap with name cm-test-opt-upd-6b993048-959a-4e5d-9d31-9528e7e81ead STEP: Creating the pod STEP: Deleting configmap cm-test-opt-del-1b5c0fe0-9a22-4576-958a-49666e2024da STEP: Updating configmap cm-test-opt-upd-6b993048-959a-4e5d-9d31-9528e7e81ead STEP: Creating configMap with name cm-test-opt-create-e52561c8-b8c2-4449-ba81-e9c51d943204 STEP: waiting to observe update in volume [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 1 01:13:39.025: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-9891" for this suite. 
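The behaviour under test comes from one field: a ConfigMap volume marked optional mounts even while the map is missing, and the kubelet folds creates, updates, and deletes into the volume on its periodic sync (hence the long "waiting to observe update in volume" step above). A sketch, names illustrative:

kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: optional-cm-demo
spec:
  containers:
  - name: test
    image: busybox
    command: ["sh", "-c", "sleep 3600"]
    volumeMounts:
    - { name: cfg, mountPath: /etc/optional-config }
  volumes:
  - name: cfg
    configMap:
      name: may-not-exist-yet
      optional: true
EOF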
• [SLOW TEST:99.175 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":294,"completed":285,"skipped":4680,"failed":1,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]"]} [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group but different versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 1 01:13:39.033: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for multiple CRDs of same group but different versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: CRs in the same group but different versions (one multiversion CRD) show up in OpenAPI documentation Jul 1 01:13:39.102: INFO: >>> kubeConfig: /root/.kube/config STEP: CRs in the same group but different versions (two CRDs) show up in OpenAPI documentation Jul 1 01:13:50.629: INFO: >>> kubeConfig: /root/.kube/config Jul 1 01:13:52.545: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 1 01:14:03.056: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-1294" for this suite. 
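Publication of both versions can be spot-checked against the aggregated OpenAPI document; the grep pattern below is illustrative, since the definition names derive from the CRD's group and kind (the e2e fixtures use groups containing "crd-publish-openapi"):

kubectl get --raw /openapi/v2 > openapi.json
grep -o 'crd-publish-openapi[^"]*' openapi.json | sort -u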
• [SLOW TEST:24.030 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for multiple CRDs of same group but different versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group but different versions [Conformance]","total":294,"completed":286,"skipped":4680,"failed":1,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]"]} SSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 1 01:14:03.064: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin Jul 1 01:14:03.316: INFO: Waiting up to 5m0s for pod "downwardapi-volume-d1fc3f45-403a-4f39-9011-29f0863fae2b" in namespace "projected-7972" to be "Succeeded or Failed" Jul 1 01:14:03.330: INFO: Pod "downwardapi-volume-d1fc3f45-403a-4f39-9011-29f0863fae2b": Phase="Pending", Reason="", readiness=false. Elapsed: 14.02725ms Jul 1 01:14:05.346: INFO: Pod "downwardapi-volume-d1fc3f45-403a-4f39-9011-29f0863fae2b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.030511288s Jul 1 01:14:07.555: INFO: Pod "downwardapi-volume-d1fc3f45-403a-4f39-9011-29f0863fae2b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.239168028s STEP: Saw pod success Jul 1 01:14:07.555: INFO: Pod "downwardapi-volume-d1fc3f45-403a-4f39-9011-29f0863fae2b" satisfied condition "Succeeded or Failed" Jul 1 01:14:07.557: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-d1fc3f45-403a-4f39-9011-29f0863fae2b container client-container: STEP: delete the pod Jul 1 01:14:07.589: INFO: Waiting for pod downwardapi-volume-d1fc3f45-403a-4f39-9011-29f0863fae2b to disappear Jul 1 01:14:07.596: INFO: Pod downwardapi-volume-d1fc3f45-403a-4f39-9011-29f0863fae2b no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 1 01:14:07.596: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-7972" for this suite. 
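The mode assertion corresponds to the mode field on a projected downwardAPI item; the sketch below sets 0400 and lists the file to show the resulting -r-------- permissions (names illustrative):

kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: downward-mode-demo
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    command: ["sh", "-c", "ls -l /etc/podinfo/podname"]
    volumeMounts:
    - { name: podinfo, mountPath: /etc/podinfo }
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: podname
            fieldRef: { fieldPath: metadata.name }
            mode: 0400
EOF
kubectl logs downward-mode-demo   # expect a -r-------- entry for podname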
•{"msg":"PASSED [sig-storage] Projected downwardAPI should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":294,"completed":287,"skipped":4697,"failed":1,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 1 01:14:07.602: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:88 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:103 STEP: Creating service test in namespace statefulset-688 [It] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating stateful set ss in namespace statefulset-688 STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-688 Jul 1 01:14:08.334: INFO: Found 0 stateful pods, waiting for 1 Jul 1 01:14:18.350: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: Confirming that stateful set scale up will not halt with unhealthy stateful pod Jul 1 01:14:18.353: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-688 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Jul 1 01:14:18.586: INFO: stderr: "I0701 01:14:18.491810 4016 log.go:172] (0xc0000e0370) (0xc0002ba640) Create stream\nI0701 01:14:18.491899 4016 log.go:172] (0xc0000e0370) (0xc0002ba640) Stream added, broadcasting: 1\nI0701 01:14:18.494219 4016 log.go:172] (0xc0000e0370) Reply frame received for 1\nI0701 01:14:18.494260 4016 log.go:172] (0xc0000e0370) (0xc0005cbb80) Create stream\nI0701 01:14:18.494275 4016 log.go:172] (0xc0000e0370) (0xc0005cbb80) Stream added, broadcasting: 3\nI0701 01:14:18.495242 4016 log.go:172] (0xc0000e0370) Reply frame received for 3\nI0701 01:14:18.495287 4016 log.go:172] (0xc0000e0370) (0xc0002badc0) Create stream\nI0701 01:14:18.495301 4016 log.go:172] (0xc0000e0370) (0xc0002badc0) Stream added, broadcasting: 5\nI0701 01:14:18.496186 4016 log.go:172] (0xc0000e0370) Reply frame received for 5\nI0701 01:14:18.558345 4016 log.go:172] (0xc0000e0370) Data frame received for 5\nI0701 01:14:18.558377 4016 log.go:172] (0xc0002badc0) (5) Data frame handling\nI0701 01:14:18.558396 4016 log.go:172] (0xc0002badc0) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0701 01:14:18.577824 4016 log.go:172] (0xc0000e0370) Data frame received for 3\nI0701 01:14:18.577852 4016 log.go:172] (0xc0005cbb80) (3) Data frame handling\nI0701 01:14:18.577944 4016 log.go:172] (0xc0005cbb80) (3) Data frame sent\nI0701 
01:14:18.578101 4016 log.go:172] (0xc0000e0370) Data frame received for 3\nI0701 01:14:18.578110 4016 log.go:172] (0xc0005cbb80) (3) Data frame handling\nI0701 01:14:18.578122 4016 log.go:172] (0xc0000e0370) Data frame received for 5\nI0701 01:14:18.578127 4016 log.go:172] (0xc0002badc0) (5) Data frame handling\nI0701 01:14:18.580134 4016 log.go:172] (0xc0000e0370) Data frame received for 1\nI0701 01:14:18.580151 4016 log.go:172] (0xc0002ba640) (1) Data frame handling\nI0701 01:14:18.580158 4016 log.go:172] (0xc0002ba640) (1) Data frame sent\nI0701 01:14:18.580169 4016 log.go:172] (0xc0000e0370) (0xc0002ba640) Stream removed, broadcasting: 1\nI0701 01:14:18.580234 4016 log.go:172] (0xc0000e0370) Go away received\nI0701 01:14:18.580584 4016 log.go:172] (0xc0000e0370) (0xc0002ba640) Stream removed, broadcasting: 1\nI0701 01:14:18.580596 4016 log.go:172] (0xc0000e0370) (0xc0005cbb80) Stream removed, broadcasting: 3\nI0701 01:14:18.580602 4016 log.go:172] (0xc0000e0370) (0xc0002badc0) Stream removed, broadcasting: 5\n" Jul 1 01:14:18.586: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Jul 1 01:14:18.586: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Jul 1 01:14:18.590: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true Jul 1 01:14:28.595: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Jul 1 01:14:28.595: INFO: Waiting for statefulset status.replicas updated to 0 Jul 1 01:14:28.616: INFO: POD NODE PHASE GRACE CONDITIONS Jul 1 01:14:28.616: INFO: ss-0 latest-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-01 01:14:08 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-01 01:14:19 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-01 01:14:19 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-01 01:14:08 +0000 UTC }] Jul 1 01:14:28.616: INFO: Jul 1 01:14:28.616: INFO: StatefulSet ss has not reached scale 3, at 1 Jul 1 01:14:29.653: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.991299543s Jul 1 01:14:30.730: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.953790532s Jul 1 01:14:31.826: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.87666954s Jul 1 01:14:32.831: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.781305333s Jul 1 01:14:33.841: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.776054466s Jul 1 01:14:34.844: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.765865064s Jul 1 01:14:35.887: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.762722913s Jul 1 01:14:36.892: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.719894426s Jul 1 01:14:37.897: INFO: Verifying statefulset ss doesn't scale past 3 for another 714.817513ms STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-688 Jul 1 01:14:38.918: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-688 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jul 1 01:14:39.151: INFO: stderr: "I0701 01:14:39.050623 4036 log.go:172] 
(0xc000a893f0) (0xc00086a3c0) Create stream\nI0701 01:14:39.050675 4036 log.go:172] (0xc000a893f0) (0xc00086a3c0) Stream added, broadcasting: 1\nI0701 01:14:39.055184 4036 log.go:172] (0xc000a893f0) Reply frame received for 1\nI0701 01:14:39.055232 4036 log.go:172] (0xc000a893f0) (0xc000861180) Create stream\nI0701 01:14:39.055250 4036 log.go:172] (0xc000a893f0) (0xc000861180) Stream added, broadcasting: 3\nI0701 01:14:39.056052 4036 log.go:172] (0xc000a893f0) Reply frame received for 3\nI0701 01:14:39.056088 4036 log.go:172] (0xc000a893f0) (0xc0006ae1e0) Create stream\nI0701 01:14:39.056104 4036 log.go:172] (0xc000a893f0) (0xc0006ae1e0) Stream added, broadcasting: 5\nI0701 01:14:39.056819 4036 log.go:172] (0xc000a893f0) Reply frame received for 5\nI0701 01:14:39.141003 4036 log.go:172] (0xc000a893f0) Data frame received for 5\nI0701 01:14:39.141047 4036 log.go:172] (0xc0006ae1e0) (5) Data frame handling\nI0701 01:14:39.141063 4036 log.go:172] (0xc0006ae1e0) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0701 01:14:39.141080 4036 log.go:172] (0xc000a893f0) Data frame received for 3\nI0701 01:14:39.141091 4036 log.go:172] (0xc000861180) (3) Data frame handling\nI0701 01:14:39.141100 4036 log.go:172] (0xc000861180) (3) Data frame sent\nI0701 01:14:39.141707 4036 log.go:172] (0xc000a893f0) Data frame received for 3\nI0701 01:14:39.141744 4036 log.go:172] (0xc000861180) (3) Data frame handling\nI0701 01:14:39.142370 4036 log.go:172] (0xc000a893f0) Data frame received for 5\nI0701 01:14:39.142392 4036 log.go:172] (0xc0006ae1e0) (5) Data frame handling\nI0701 01:14:39.143470 4036 log.go:172] (0xc000a893f0) Data frame received for 1\nI0701 01:14:39.143493 4036 log.go:172] (0xc00086a3c0) (1) Data frame handling\nI0701 01:14:39.143517 4036 log.go:172] (0xc00086a3c0) (1) Data frame sent\nI0701 01:14:39.143550 4036 log.go:172] (0xc000a893f0) (0xc00086a3c0) Stream removed, broadcasting: 1\nI0701 01:14:39.143607 4036 log.go:172] (0xc000a893f0) Go away received\nI0701 01:14:39.143991 4036 log.go:172] (0xc000a893f0) (0xc00086a3c0) Stream removed, broadcasting: 1\nI0701 01:14:39.144010 4036 log.go:172] (0xc000a893f0) (0xc000861180) Stream removed, broadcasting: 3\nI0701 01:14:39.144020 4036 log.go:172] (0xc000a893f0) (0xc0006ae1e0) Stream removed, broadcasting: 5\n" Jul 1 01:14:39.152: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Jul 1 01:14:39.152: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Jul 1 01:14:39.152: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-688 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jul 1 01:14:39.394: INFO: stderr: "I0701 01:14:39.325599 4057 log.go:172] (0xc0009d1760) (0xc00081dc20) Create stream\nI0701 01:14:39.325650 4057 log.go:172] (0xc0009d1760) (0xc00081dc20) Stream added, broadcasting: 1\nI0701 01:14:39.329397 4057 log.go:172] (0xc0009d1760) Reply frame received for 1\nI0701 01:14:39.329443 4057 log.go:172] (0xc0009d1760) (0xc000816500) Create stream\nI0701 01:14:39.329460 4057 log.go:172] (0xc0009d1760) (0xc000816500) Stream added, broadcasting: 3\nI0701 01:14:39.330266 4057 log.go:172] (0xc0009d1760) Reply frame received for 3\nI0701 01:14:39.330299 4057 log.go:172] (0xc0009d1760) (0xc000686960) Create stream\nI0701 01:14:39.330310 4057 log.go:172] (0xc0009d1760) (0xc000686960) Stream added, 
broadcasting: 5\nI0701 01:14:39.330998 4057 log.go:172] (0xc0009d1760) Reply frame received for 5\nI0701 01:14:39.379220 4057 log.go:172] (0xc0009d1760) Data frame received for 5\nI0701 01:14:39.379238 4057 log.go:172] (0xc000686960) (5) Data frame handling\nI0701 01:14:39.379248 4057 log.go:172] (0xc000686960) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0701 01:14:39.386854 4057 log.go:172] (0xc0009d1760) Data frame received for 5\nI0701 01:14:39.386880 4057 log.go:172] (0xc000686960) (5) Data frame handling\nI0701 01:14:39.386892 4057 log.go:172] (0xc000686960) (5) Data frame sent\nmv: can't rename '/tmp/index.html': No such file or directory\nI0701 01:14:39.386936 4057 log.go:172] (0xc0009d1760) Data frame received for 3\nI0701 01:14:39.386975 4057 log.go:172] (0xc000816500) (3) Data frame handling\nI0701 01:14:39.387005 4057 log.go:172] (0xc000816500) (3) Data frame sent\nI0701 01:14:39.387219 4057 log.go:172] (0xc0009d1760) Data frame received for 5\nI0701 01:14:39.387235 4057 log.go:172] (0xc000686960) (5) Data frame handling\nI0701 01:14:39.387247 4057 log.go:172] (0xc000686960) (5) Data frame sent\nI0701 01:14:39.387258 4057 log.go:172] (0xc0009d1760) Data frame received for 3\nI0701 01:14:39.387263 4057 log.go:172] (0xc000816500) (3) Data frame handling\n+ true\nI0701 01:14:39.387299 4057 log.go:172] (0xc0009d1760) Data frame received for 5\nI0701 01:14:39.387318 4057 log.go:172] (0xc000686960) (5) Data frame handling\nI0701 01:14:39.388691 4057 log.go:172] (0xc0009d1760) Data frame received for 1\nI0701 01:14:39.388718 4057 log.go:172] (0xc00081dc20) (1) Data frame handling\nI0701 01:14:39.388737 4057 log.go:172] (0xc00081dc20) (1) Data frame sent\nI0701 01:14:39.388754 4057 log.go:172] (0xc0009d1760) (0xc00081dc20) Stream removed, broadcasting: 1\nI0701 01:14:39.388784 4057 log.go:172] (0xc0009d1760) Go away received\nI0701 01:14:39.389033 4057 log.go:172] (0xc0009d1760) (0xc00081dc20) Stream removed, broadcasting: 1\nI0701 01:14:39.389044 4057 log.go:172] (0xc0009d1760) (0xc000816500) Stream removed, broadcasting: 3\nI0701 01:14:39.389050 4057 log.go:172] (0xc0009d1760) (0xc000686960) Stream removed, broadcasting: 5\n" Jul 1 01:14:39.394: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Jul 1 01:14:39.394: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Jul 1 01:14:39.394: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-688 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jul 1 01:14:39.609: INFO: stderr: "I0701 01:14:39.531523 4076 log.go:172] (0xc000ae6000) (0xc000293b80) Create stream\nI0701 01:14:39.531595 4076 log.go:172] (0xc000ae6000) (0xc000293b80) Stream added, broadcasting: 1\nI0701 01:14:39.533810 4076 log.go:172] (0xc000ae6000) Reply frame received for 1\nI0701 01:14:39.533843 4076 log.go:172] (0xc000ae6000) (0xc000293e00) Create stream\nI0701 01:14:39.533854 4076 log.go:172] (0xc000ae6000) (0xc000293e00) Stream added, broadcasting: 3\nI0701 01:14:39.534683 4076 log.go:172] (0xc000ae6000) Reply frame received for 3\nI0701 01:14:39.534708 4076 log.go:172] (0xc000ae6000) (0xc0006c0be0) Create stream\nI0701 01:14:39.534717 4076 log.go:172] (0xc000ae6000) (0xc0006c0be0) Stream added, broadcasting: 5\nI0701 01:14:39.535518 4076 log.go:172] (0xc000ae6000) Reply frame received for 5\nI0701 01:14:39.599850 
4076 log.go:172] (0xc000ae6000) Data frame received for 3\nI0701 01:14:39.599892 4076 log.go:172] (0xc000293e00) (3) Data frame handling\nI0701 01:14:39.599904 4076 log.go:172] (0xc000293e00) (3) Data frame sent\nI0701 01:14:39.599913 4076 log.go:172] (0xc000ae6000) Data frame received for 3\nI0701 01:14:39.599927 4076 log.go:172] (0xc000293e00) (3) Data frame handling\nI0701 01:14:39.599962 4076 log.go:172] (0xc000ae6000) Data frame received for 5\nI0701 01:14:39.599972 4076 log.go:172] (0xc0006c0be0) (5) Data frame handling\nI0701 01:14:39.599985 4076 log.go:172] (0xc0006c0be0) (5) Data frame sent\nI0701 01:14:39.599995 4076 log.go:172] (0xc000ae6000) Data frame received for 5\nI0701 01:14:39.600006 4076 log.go:172] (0xc0006c0be0) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\nI0701 01:14:39.601447 4076 log.go:172] (0xc000ae6000) Data frame received for 1\nI0701 01:14:39.601488 4076 log.go:172] (0xc000293b80) (1) Data frame handling\nI0701 01:14:39.601514 4076 log.go:172] (0xc000293b80) (1) Data frame sent\nI0701 01:14:39.601551 4076 log.go:172] (0xc000ae6000) (0xc000293b80) Stream removed, broadcasting: 1\nI0701 01:14:39.601579 4076 log.go:172] (0xc000ae6000) Go away received\nI0701 01:14:39.601953 4076 log.go:172] (0xc000ae6000) (0xc000293b80) Stream removed, broadcasting: 1\nI0701 01:14:39.601978 4076 log.go:172] (0xc000ae6000) (0xc000293e00) Stream removed, broadcasting: 3\nI0701 01:14:39.601990 4076 log.go:172] (0xc000ae6000) (0xc0006c0be0) Stream removed, broadcasting: 5\n" Jul 1 01:14:39.609: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Jul 1 01:14:39.609: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-2: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Jul 1 01:14:39.613: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true Jul 1 01:14:39.613: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true Jul 1 01:14:39.613: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Scale down will not halt with unhealthy stateful pod Jul 1 01:14:39.616: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-688 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Jul 1 01:14:39.821: INFO: stderr: "I0701 01:14:39.755164 4098 log.go:172] (0xc000a6b290) (0xc000be8320) Create stream\nI0701 01:14:39.755251 4098 log.go:172] (0xc000a6b290) (0xc000be8320) Stream added, broadcasting: 1\nI0701 01:14:39.760636 4098 log.go:172] (0xc000a6b290) Reply frame received for 1\nI0701 01:14:39.760678 4098 log.go:172] (0xc000a6b290) (0xc00042e960) Create stream\nI0701 01:14:39.760691 4098 log.go:172] (0xc000a6b290) (0xc00042e960) Stream added, broadcasting: 3\nI0701 01:14:39.761755 4098 log.go:172] (0xc000a6b290) Reply frame received for 3\nI0701 01:14:39.761786 4098 log.go:172] (0xc000a6b290) (0xc00042fcc0) Create stream\nI0701 01:14:39.761795 4098 log.go:172] (0xc000a6b290) (0xc00042fcc0) Stream added, broadcasting: 5\nI0701 01:14:39.762712 4098 log.go:172] (0xc000a6b290) Reply frame received for 5\nI0701 01:14:39.811446 4098 log.go:172] (0xc000a6b290) Data frame received for 3\nI0701 01:14:39.811479 4098 log.go:172] (0xc00042e960) (3) Data frame handling\nI0701 01:14:39.811494 4098 log.go:172] (0xc00042e960) (3) Data frame 
sent\nI0701 01:14:39.811540 4098 log.go:172] (0xc000a6b290) Data frame received for 5\nI0701 01:14:39.811552 4098 log.go:172] (0xc00042fcc0) (5) Data frame handling\nI0701 01:14:39.811565 4098 log.go:172] (0xc00042fcc0) (5) Data frame sent\nI0701 01:14:39.811586 4098 log.go:172] (0xc000a6b290) Data frame received for 5\nI0701 01:14:39.811602 4098 log.go:172] (0xc00042fcc0) (5) Data frame handling\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0701 01:14:39.811807 4098 log.go:172] (0xc000a6b290) Data frame received for 3\nI0701 01:14:39.811826 4098 log.go:172] (0xc00042e960) (3) Data frame handling\nI0701 01:14:39.813593 4098 log.go:172] (0xc000a6b290) Data frame received for 1\nI0701 01:14:39.813614 4098 log.go:172] (0xc000be8320) (1) Data frame handling\nI0701 01:14:39.813626 4098 log.go:172] (0xc000be8320) (1) Data frame sent\nI0701 01:14:39.813641 4098 log.go:172] (0xc000a6b290) (0xc000be8320) Stream removed, broadcasting: 1\nI0701 01:14:39.813730 4098 log.go:172] (0xc000a6b290) Go away received\nI0701 01:14:39.814113 4098 log.go:172] (0xc000a6b290) (0xc000be8320) Stream removed, broadcasting: 1\nI0701 01:14:39.814144 4098 log.go:172] (0xc000a6b290) (0xc00042e960) Stream removed, broadcasting: 3\nI0701 01:14:39.814160 4098 log.go:172] (0xc000a6b290) (0xc00042fcc0) Stream removed, broadcasting: 5\n" Jul 1 01:14:39.822: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Jul 1 01:14:39.822: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Jul 1 01:14:39.822: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-688 ss-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Jul 1 01:14:40.084: INFO: stderr: "I0701 01:14:39.976638 4118 log.go:172] (0xc00027adc0) (0xc000b6e460) Create stream\nI0701 01:14:39.976718 4118 log.go:172] (0xc00027adc0) (0xc000b6e460) Stream added, broadcasting: 1\nI0701 01:14:39.982721 4118 log.go:172] (0xc00027adc0) Reply frame received for 1\nI0701 01:14:39.982773 4118 log.go:172] (0xc00027adc0) (0xc0005ee780) Create stream\nI0701 01:14:39.982789 4118 log.go:172] (0xc00027adc0) (0xc0005ee780) Stream added, broadcasting: 3\nI0701 01:14:39.983742 4118 log.go:172] (0xc00027adc0) Reply frame received for 3\nI0701 01:14:39.983788 4118 log.go:172] (0xc00027adc0) (0xc0005c08c0) Create stream\nI0701 01:14:39.983800 4118 log.go:172] (0xc00027adc0) (0xc0005c08c0) Stream added, broadcasting: 5\nI0701 01:14:39.984853 4118 log.go:172] (0xc00027adc0) Reply frame received for 5\nI0701 01:14:40.048538 4118 log.go:172] (0xc00027adc0) Data frame received for 5\nI0701 01:14:40.048575 4118 log.go:172] (0xc0005c08c0) (5) Data frame handling\nI0701 01:14:40.048601 4118 log.go:172] (0xc0005c08c0) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0701 01:14:40.074380 4118 log.go:172] (0xc00027adc0) Data frame received for 3\nI0701 01:14:40.074418 4118 log.go:172] (0xc0005ee780) (3) Data frame handling\nI0701 01:14:40.074427 4118 log.go:172] (0xc0005ee780) (3) Data frame sent\nI0701 01:14:40.074445 4118 log.go:172] (0xc00027adc0) Data frame received for 5\nI0701 01:14:40.074452 4118 log.go:172] (0xc0005c08c0) (5) Data frame handling\nI0701 01:14:40.074622 4118 log.go:172] (0xc00027adc0) Data frame received for 3\nI0701 01:14:40.074705 4118 log.go:172] (0xc0005ee780) (3) Data frame handling\nI0701 01:14:40.077092 4118 log.go:172] 
(0xc00027adc0) Data frame received for 1\nI0701 01:14:40.077421 4118 log.go:172] (0xc000b6e460) (1) Data frame handling\nI0701 01:14:40.077449 4118 log.go:172] (0xc000b6e460) (1) Data frame sent\nI0701 01:14:40.077691 4118 log.go:172] (0xc00027adc0) (0xc000b6e460) Stream removed, broadcasting: 1\nI0701 01:14:40.077809 4118 log.go:172] (0xc00027adc0) Go away received\nI0701 01:14:40.078029 4118 log.go:172] (0xc00027adc0) (0xc000b6e460) Stream removed, broadcasting: 1\nI0701 01:14:40.078050 4118 log.go:172] (0xc00027adc0) (0xc0005ee780) Stream removed, broadcasting: 3\nI0701 01:14:40.078064 4118 log.go:172] (0xc00027adc0) (0xc0005c08c0) Stream removed, broadcasting: 5\n" Jul 1 01:14:40.084: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Jul 1 01:14:40.084: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Jul 1 01:14:40.084: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-688 ss-2 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Jul 1 01:14:40.353: INFO: stderr: "I0701 01:14:40.255120 4139 log.go:172] (0xc0000e8370) (0xc0000f3180) Create stream\nI0701 01:14:40.255198 4139 log.go:172] (0xc0000e8370) (0xc0000f3180) Stream added, broadcasting: 1\nI0701 01:14:40.258128 4139 log.go:172] (0xc0000e8370) Reply frame received for 1\nI0701 01:14:40.258165 4139 log.go:172] (0xc0000e8370) (0xc0002ba500) Create stream\nI0701 01:14:40.258183 4139 log.go:172] (0xc0000e8370) (0xc0002ba500) Stream added, broadcasting: 3\nI0701 01:14:40.259181 4139 log.go:172] (0xc0000e8370) Reply frame received for 3\nI0701 01:14:40.259229 4139 log.go:172] (0xc0000e8370) (0xc00036ee60) Create stream\nI0701 01:14:40.259240 4139 log.go:172] (0xc0000e8370) (0xc00036ee60) Stream added, broadcasting: 5\nI0701 01:14:40.260212 4139 log.go:172] (0xc0000e8370) Reply frame received for 5\nI0701 01:14:40.312621 4139 log.go:172] (0xc0000e8370) Data frame received for 5\nI0701 01:14:40.312652 4139 log.go:172] (0xc00036ee60) (5) Data frame handling\nI0701 01:14:40.312674 4139 log.go:172] (0xc00036ee60) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0701 01:14:40.343747 4139 log.go:172] (0xc0000e8370) Data frame received for 3\nI0701 01:14:40.343779 4139 log.go:172] (0xc0000e8370) Data frame received for 5\nI0701 01:14:40.343819 4139 log.go:172] (0xc00036ee60) (5) Data frame handling\nI0701 01:14:40.343846 4139 log.go:172] (0xc0002ba500) (3) Data frame handling\nI0701 01:14:40.343860 4139 log.go:172] (0xc0002ba500) (3) Data frame sent\nI0701 01:14:40.343871 4139 log.go:172] (0xc0000e8370) Data frame received for 3\nI0701 01:14:40.343882 4139 log.go:172] (0xc0002ba500) (3) Data frame handling\nI0701 01:14:40.345736 4139 log.go:172] (0xc0000e8370) Data frame received for 1\nI0701 01:14:40.345755 4139 log.go:172] (0xc0000f3180) (1) Data frame handling\nI0701 01:14:40.345772 4139 log.go:172] (0xc0000f3180) (1) Data frame sent\nI0701 01:14:40.345790 4139 log.go:172] (0xc0000e8370) (0xc0000f3180) Stream removed, broadcasting: 1\nI0701 01:14:40.345884 4139 log.go:172] (0xc0000e8370) Go away received\nI0701 01:14:40.346045 4139 log.go:172] (0xc0000e8370) (0xc0000f3180) Stream removed, broadcasting: 1\nI0701 01:14:40.346056 4139 log.go:172] (0xc0000e8370) (0xc0002ba500) Stream removed, broadcasting: 3\nI0701 01:14:40.346062 4139 log.go:172] (0xc0000e8370) (0xc00036ee60) Stream removed, 
broadcasting: 5\n" Jul 1 01:14:40.353: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Jul 1 01:14:40.353: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-2: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Jul 1 01:14:40.353: INFO: Waiting for statefulset status.replicas updated to 0 Jul 1 01:14:40.356: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 2 Jul 1 01:14:50.363: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Jul 1 01:14:50.363: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false Jul 1 01:14:50.363: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false Jul 1 01:14:50.377: INFO: POD NODE PHASE GRACE CONDITIONS Jul 1 01:14:50.377: INFO: ss-0 latest-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-01 01:14:08 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-01 01:14:40 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-01 01:14:40 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-01 01:14:08 +0000 UTC }] Jul 1 01:14:50.377: INFO: ss-1 latest-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-01 01:14:28 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-01 01:14:40 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-01 01:14:40 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-01 01:14:28 +0000 UTC }] Jul 1 01:14:50.377: INFO: ss-2 latest-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-01 01:14:28 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-01 01:14:41 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-01 01:14:41 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-01 01:14:28 +0000 UTC }] Jul 1 01:14:50.377: INFO: Jul 1 01:14:50.377: INFO: StatefulSet ss has not reached scale 0, at 3 Jul 1 01:14:51.760: INFO: POD NODE PHASE GRACE CONDITIONS Jul 1 01:14:51.761: INFO: ss-0 latest-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-01 01:14:08 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-01 01:14:40 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-01 01:14:40 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-01 01:14:08 +0000 UTC }] Jul 1 01:14:51.761: INFO: ss-1 latest-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-01 01:14:28 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-01 01:14:40 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-01 01:14:40 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-01 01:14:28 +0000 UTC }] Jul 1 01:14:51.761: INFO: ss-2 
latest-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-01 01:14:28 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-01 01:14:41 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-01 01:14:41 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-01 01:14:28 +0000 UTC }] Jul 1 01:14:51.761: INFO: Jul 1 01:14:51.761: INFO: StatefulSet ss has not reached scale 0, at 3 Jul 1 01:14:52.765: INFO: POD NODE PHASE GRACE CONDITIONS Jul 1 01:14:52.765: INFO: ss-0 latest-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-01 01:14:08 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-01 01:14:40 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-01 01:14:40 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-01 01:14:08 +0000 UTC }] Jul 1 01:14:52.766: INFO: ss-1 latest-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-01 01:14:28 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-01 01:14:40 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-01 01:14:40 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-01 01:14:28 +0000 UTC }] Jul 1 01:14:52.766: INFO: ss-2 latest-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-01 01:14:28 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-01 01:14:41 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-01 01:14:41 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-01 01:14:28 +0000 UTC }] Jul 1 01:14:52.766: INFO: Jul 1 01:14:52.766: INFO: StatefulSet ss has not reached scale 0, at 3 Jul 1 01:14:53.777: INFO: POD NODE PHASE GRACE CONDITIONS Jul 1 01:14:53.777: INFO: ss-1 latest-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-01 01:14:28 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-01 01:14:40 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-01 01:14:40 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-01 01:14:28 +0000 UTC }] Jul 1 01:14:53.777: INFO: ss-2 latest-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-01 01:14:28 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-01 01:14:41 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-01 01:14:41 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-01 01:14:28 +0000 UTC }] Jul 1 01:14:53.777: INFO: Jul 1 01:14:53.777: INFO: StatefulSet ss has not reached scale 0, at 2 Jul 1 01:14:54.782: INFO: POD NODE PHASE GRACE CONDITIONS Jul 1 01:14:54.782: INFO: ss-1 latest-worker2 Pending 30s [{Initialized True 0001-01-01 
00:00:00 +0000 UTC 2020-07-01 01:14:28 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-01 01:14:40 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-01 01:14:40 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-01 01:14:28 +0000 UTC }] Jul 1 01:14:54.782: INFO: Jul 1 01:14:54.782: INFO: StatefulSet ss has not reached scale 0, at 1 Jul 1 01:14:55.787: INFO: POD NODE PHASE GRACE CONDITIONS Jul 1 01:14:55.787: INFO: ss-1 latest-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-01 01:14:28 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-01 01:14:40 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-01 01:14:40 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-01 01:14:28 +0000 UTC }] Jul 1 01:14:55.787: INFO: Jul 1 01:14:55.787: INFO: StatefulSet ss has not reached scale 0, at 1 Jul 1 01:14:56.792: INFO: POD NODE PHASE GRACE CONDITIONS Jul 1 01:14:56.792: INFO: ss-1 latest-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-01 01:14:28 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-01 01:14:40 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-01 01:14:40 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-01 01:14:28 +0000 UTC }] Jul 1 01:14:56.792: INFO: Jul 1 01:14:56.792: INFO: StatefulSet ss has not reached scale 0, at 1 Jul 1 01:14:57.798: INFO: POD NODE PHASE GRACE CONDITIONS Jul 1 01:14:57.798: INFO: ss-1 latest-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-01 01:14:28 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-01 01:14:40 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-01 01:14:40 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-01 01:14:28 +0000 UTC }] Jul 1 01:14:57.798: INFO: Jul 1 01:14:57.798: INFO: StatefulSet ss has not reached scale 0, at 1 Jul 1 01:14:58.803: INFO: POD NODE PHASE GRACE CONDITIONS Jul 1 01:14:58.803: INFO: ss-1 latest-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-01 01:14:28 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-01 01:14:40 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-01 01:14:40 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-01 01:14:28 +0000 UTC }] Jul 1 01:14:58.803: INFO: Jul 1 01:14:58.803: INFO: StatefulSet ss has not reached scale 0, at 1 Jul 1 01:14:59.808: INFO: POD NODE PHASE GRACE CONDITIONS Jul 1 01:14:59.808: INFO: ss-1 latest-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-01 01:14:28 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-01 01:14:40 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-01 
01:14:40 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-01 01:14:28 +0000 UTC }] Jul 1 01:14:59.808: INFO: Jul 1 01:14:59.808: INFO: StatefulSet ss has not reached scale 0, at 1 STEP: Scaling down stateful set ss to 0 replicas and waiting until none of the pods are running in namespace statefulset-688 Jul 1 01:15:00.813: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-688 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jul 1 01:15:00.966: INFO: rc: 1 Jul 1 01:15:00.966: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-688 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: error: unable to upgrade connection: container not found ("webserver") error: exit status 1 Jul 1 01:15:10.966: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-688 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jul 1 01:15:11.082: INFO: rc: 1 Jul 1 01:15:11.082: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-688 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 Jul 1 01:15:21.082: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-688 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jul 1 01:15:21.181: INFO: rc: 1 Jul 1 01:15:21.181: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-688 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 Jul 1 01:15:31.181: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-688 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jul 1 01:15:31.301: INFO: rc: 1 Jul 1 01:15:31.301: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-688 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 Jul 1 01:15:41.301: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-688 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jul 1 01:15:41.404: INFO: rc: 1 Jul 1 01:15:41.404: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-688 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || 
true: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 Jul 1 01:15:51.404: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-688 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jul 1 01:15:51.516: INFO: rc: 1 Jul 1 01:15:51.516: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-688 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 Jul 1 01:16:01.516: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-688 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jul 1 01:16:01.619: INFO: rc: 1 Jul 1 01:16:01.619: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-688 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 Jul 1 01:16:11.619: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-688 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jul 1 01:16:11.725: INFO: rc: 1 Jul 1 01:16:11.725: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-688 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 Jul 1 01:16:21.726: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-688 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jul 1 01:16:21.847: INFO: rc: 1 Jul 1 01:16:21.847: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-688 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 Jul 1 01:16:31.848: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-688 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jul 1 01:16:31.951: INFO: rc: 1 Jul 1 01:16:31.951: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-688 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 Jul 1 01:16:41.951: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-688 ss-1 -- /bin/sh -x -c mv -v 
/tmp/index.html /usr/local/apache2/htdocs/ || true' Jul 1 01:16:42.069: INFO: rc: 1 Jul 1 01:16:42.069: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-688 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 Jul 1 01:16:52.069: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-688 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jul 1 01:16:52.172: INFO: rc: 1 Jul 1 01:16:52.173: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-688 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 Jul 1 01:17:02.173: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-688 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jul 1 01:17:02.287: INFO: rc: 1 Jul 1 01:17:02.287: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-688 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 Jul 1 01:17:12.287: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-688 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jul 1 01:17:12.400: INFO: rc: 1 Jul 1 01:17:12.401: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-688 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 Jul 1 01:17:22.401: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-688 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jul 1 01:17:26.186: INFO: rc: 1 Jul 1 01:17:26.186: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-688 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 Jul 1 01:17:36.186: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-688 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jul 1 01:17:36.293: INFO: rc: 1 Jul 1 01:17:36.293: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-688 ss-1 
-- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 Jul 1 01:17:46.294: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-688 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jul 1 01:17:46.406: INFO: rc: 1 Jul 1 01:17:46.406: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-688 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 Jul 1 01:17:56.406: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-688 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jul 1 01:17:56.512: INFO: rc: 1 Jul 1 01:17:56.512: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-688 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 Jul 1 01:18:06.513: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-688 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jul 1 01:18:06.615: INFO: rc: 1 Jul 1 01:18:06.615: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-688 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 Jul 1 01:18:16.616: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-688 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jul 1 01:18:16.719: INFO: rc: 1 Jul 1 01:18:16.719: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-688 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 Jul 1 01:18:26.720: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-688 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jul 1 01:18:26.822: INFO: rc: 1 Jul 1 01:18:26.822: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-688 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 Jul 1 01:18:36.823: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 
--kubeconfig=/root/.kube/config exec --namespace=statefulset-688 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jul 1 01:18:36.924: INFO: rc: 1 Jul 1 01:18:36.924: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-688 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 Jul 1 01:18:46.925: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-688 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jul 1 01:18:47.027: INFO: rc: 1 Jul 1 01:18:47.027: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-688 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 Jul 1 01:18:57.028: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-688 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jul 1 01:18:57.143: INFO: rc: 1 Jul 1 01:18:57.143: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-688 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 Jul 1 01:19:07.143: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-688 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jul 1 01:19:07.254: INFO: rc: 1 Jul 1 01:19:07.254: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-688 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 Jul 1 01:19:17.255: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-688 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jul 1 01:19:17.362: INFO: rc: 1 Jul 1 01:19:17.362: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-688 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 Jul 1 01:19:27.363: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-688 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jul 1 01:19:27.468: INFO: rc: 1 Jul 1 01:19:27.468: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl 
--server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-688 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 Jul 1 01:19:37.468: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-688 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jul 1 01:19:37.572: INFO: rc: 1 Jul 1 01:19:37.572: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-688 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 Jul 1 01:19:47.573: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-688 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jul 1 01:19:47.669: INFO: rc: 1 Jul 1 01:19:47.669: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-688 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 Jul 1 01:19:57.669: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-688 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jul 1 01:19:57.777: INFO: rc: 1 Jul 1 01:19:57.777: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-688 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 Jul 1 01:20:07.778: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-688 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jul 1 01:20:07.891: INFO: rc: 1 Jul 1 01:20:07.891: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: Jul 1 01:20:07.891: INFO: Scaling statefulset ss to 0 Jul 1 01:20:07.900: INFO: Waiting for statefulset status.replicas updated to 0 [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:114 Jul 1 01:20:07.902: INFO: Deleting all statefulset in ns statefulset-688 Jul 1 01:20:07.919: INFO: Scaling statefulset ss to 0 Jul 1 01:20:07.938: INFO: Waiting for statefulset status.replicas updated to 0 Jul 1 01:20:07.940: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 1 01:20:07.951: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-688" for this suite. 
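The scale-down sequence above is driven entirely by the pods' HTTP readiness probe: the suite makes a pod unready by moving httpd's index.html out of the document root via kubectl exec, and makes it ready again by moving the file back. Once a pod is deleted during scale-down, the restore command starts failing with pods "ss-1" not found, and the RunHostCmd retry loop simply re-runs it every 10s until it gives up and proceeds to scale the set to 0. Below is a minimal Go sketch of that pattern, assuming only that kubectl is on PATH; the helper name runHostCmd and the main function are illustrative, while the namespace, pod name, paths, and shell flags are taken verbatim from the Running '...' lines above.

package main

import (
	"fmt"
	"os/exec"
)

// runHostCmd mirrors the invocations logged above:
//   kubectl exec --namespace=<ns> <pod> -- /bin/sh -x -c '<cmd>'
func runHostCmd(ns, pod, cmd string) (string, error) {
	out, err := exec.Command("kubectl", "exec",
		"--namespace="+ns, pod, "--", "/bin/sh", "-x", "-c", cmd).CombinedOutput()
	return string(out), err
}

func main() {
	ns, pod := "statefulset-688", "ss-0" // names taken from the log

	// Break the readiness probe: move the index file out of the docroot.
	// The trailing "|| true" keeps the exit status 0 even when the file is
	// already gone, which is why the log can show "No such file or
	// directory" on stderr and still report the command as successful.
	out, err := runHostCmd(ns, pod, "mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true")
	fmt.Print(out)
	if err != nil {
		fmt.Println("exec failed:", err)
	}

	// Restore the probe the same way; this is the command the retry loop
	// above keeps re-running while the pod is being deleted.
	out, err = runHostCmd(ns, pod, "mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true")
	fmt.Print(out)
	if err != nil {
		fmt.Println("exec failed:", err)
	}
}

Because the probe is broken file-by-file rather than by touching the controller, the StatefulSet's replica count and the pods' readiness can be driven independently, which is exactly what "scale down will not halt with unhealthy stateful pod" exercises.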
• [SLOW TEST:360.354 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]","total":294,"completed":288,"skipped":4725,"failed":1,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-auth] Certificates API [Privileged:ClusterAdmin] should support building a client with a CSR [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-auth] Certificates API [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 1 01:20:07.957: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename certificates STEP: Waiting for a default service account to be provisioned in namespace [It] should support building a client with a CSR [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Jul 1 01:20:08.235: INFO: creating CSR Jul 1 01:20:08.237: FAIL: Unexpected error: <*errors.StatusError | 0xc0014c46e0>: { ErrStatus: { TypeMeta: {Kind: "", APIVersion: ""}, ListMeta: { SelfLink: "", ResourceVersion: "", Continue: "", RemainingItemCount: nil, }, Status: "Failure", Message: "the server could not find the requested resource", Reason: "NotFound", Details: {Name: "", Group: "", Kind: "", UID: "", Causes: nil, RetryAfterSeconds: 0}, Code: 404, }, } the server could not find the requested resource occurred Full Stack Trace k8s.io/kubernetes/test/e2e/auth.glob..func2.1() /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/certificates.go:117 +0xaa6 k8s.io/kubernetes/test/e2e.RunE2ETests(0xc002a0af00) _output/local/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x360 k8s.io/kubernetes/test/e2e.TestE2E(0xc002a0af00) _output/local/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:141 +0x2b testing.tRunner(0xc002a0af00, 0x4e37068) /usr/local/go/src/testing/testing.go:909 +0xc9 created by testing.(*T).Run /usr/local/go/src/testing/testing.go:960 +0x350 [AfterEach] [sig-auth] Certificates API [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 STEP: Collecting events from namespace "certificates-1973". STEP: Found 0 events. 
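The FAIL above is raised by the first API call the test issues: creating a certificates.k8s.io/v1 CertificateSigningRequest (test/e2e/auth/certificates.go:117). A 404 NotFound with "the server could not find the requested resource" on a create means the group/version itself is not served, i.e. the API server does not expose the v1 CSR API (introduced in Kubernetes 1.19). A hedged client-go sketch of the failing call follows; the CSR spec contents are illustrative placeholders, and only the Create call and the resulting error shape correspond to the log.

package main

import (
	"context"
	"fmt"

	certv1 "k8s.io/api/certificates/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	csr := &certv1.CertificateSigningRequest{
		ObjectMeta: metav1.ObjectMeta{GenerateName: "e2e-example-csr-"},
		Spec: certv1.CertificateSigningRequestSpec{
			// Placeholder PEM: a real test embeds a freshly generated
			// x509 certificate request here.
			Request:    []byte("-----BEGIN CERTIFICATE REQUEST-----\n...\n-----END CERTIFICATE REQUEST-----\n"),
			SignerName: "kubernetes.io/kube-apiserver-client",
			Usages:     []certv1.KeyUsage{certv1.UsageClientAuth},
		},
	}

	// Against a server that does not serve certificates.k8s.io/v1, this
	// returns an *errors.StatusError with Reason=NotFound, Code=404 and
	// the message "the server could not find the requested resource".
	_, err = cs.CertificatesV1().CertificateSigningRequests().Create(
		context.TODO(), csr, metav1.CreateOptions{})
	fmt.Println(err)
}
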
Jul 1 01:20:08.309: INFO: POD NODE PHASE GRACE CONDITIONS Jul 1 01:20:08.309: INFO: Jul 1 01:20:08.314: INFO: Logging node info for node latest-control-plane Jul 1 01:20:08.316: INFO: Node Info: &Node{ObjectMeta:{latest-control-plane /api/v1/nodes/latest-control-plane b7c23ecc-1548-479e-83f7-eb5444fbe13d 17264780 0 2020-04-29 09:53:29 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:latest-control-plane kubernetes.io/os:linux node-role.kubernetes.io/master:] map[kubeadm.alpha.kubernetes.io/cri-socket:/run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2020-04-29 09:53:35 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/master":{}}}}} {kube-controller-manager Update v1 2020-04-29 09:54:06 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.0.0/24\"":{}},"f:taints":{}}}} {kubelet Update v1 2020-07-01 01:16:25 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:capacity":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}}]},Spec:NodeSpec{PodCIDR:10.244.0.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.0.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922129408 0} {} 131759892Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922129408 0} {} 131759892Ki BinarySI},pods: 
{{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2020-07-01 01:16:25 +0000 UTC,LastTransitionTime:2020-04-29 09:53:26 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2020-07-01 01:16:25 +0000 UTC,LastTransitionTime:2020-04-29 09:53:26 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2020-07-01 01:16:25 +0000 UTC,LastTransitionTime:2020-04-29 09:53:26 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2020-07-01 01:16:25 +0000 UTC,LastTransitionTime:2020-04-29 09:54:06 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.17.0.11,},NodeAddress{Type:Hostname,Address:latest-control-plane,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:3939cf129c9d4d6e85e611ab996d9137,SystemUUID:2573ae1d-4849-412e-9a34-432f95556990,BootID:ca2aa731-f890-4956-92a1-ff8c7560d571,KernelVersion:4.15.0-88-generic,OSImage:Ubuntu 19.10,ContainerRuntimeVersion:containerd://1.3.3-14-g449e9269,KubeletVersion:v1.18.2,KubeProxyVersion:v1.18.2,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[k8s.gcr.io/etcd:3.4.3-0],SizeBytes:289997247,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver:v1.18.2],SizeBytes:146648881,},ContainerImage{Names:[k8s.gcr.io/kube-proxy:v1.18.2],SizeBytes:132860030,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager:v1.18.2],SizeBytes:132826433,},ContainerImage{Names:[docker.io/kindest/kindnetd:0.5.4],SizeBytes:113207016,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler:v1.18.2],SizeBytes:113095985,},ContainerImage{Names:[k8s.gcr.io/debian-base:v2.0.0],SizeBytes:53884301,},ContainerImage{Names:[k8s.gcr.io/coredns:1.6.7],SizeBytes:43921887,},ContainerImage{Names:[docker.io/rancher/local-path-provisioner:v0.0.12],SizeBytes:41994847,},ContainerImage{Names:[k8s.gcr.io/pause:3.2],SizeBytes:685724,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Jul 1 01:20:08.317: INFO: Logging kubelet events for node latest-control-plane Jul 1 01:20:08.319: INFO: Logging pods the kubelet thinks is on node latest-control-plane Jul 1 01:20:08.342: INFO: kindnet-8x7pf started at 2020-04-29 09:53:53 +0000 UTC (0+1 container statuses recorded) Jul 1 01:20:08.342: INFO: Container kindnet-cni ready: true, restart count 5 Jul 1 01:20:08.342: INFO: coredns-66bff467f8-8n5vh started at 2020-04-29 09:54:06 +0000 UTC (0+1 container statuses recorded) Jul 1 01:20:08.342: INFO: Container coredns ready: true, restart count 0 Jul 1 01:20:08.342: INFO: local-path-provisioner-bd4bb6b75-bmf2h started at 2020-04-29 09:54:06 +0000 UTC (0+1 container statuses recorded) Jul 1 01:20:08.342: INFO: Container local-path-provisioner ready: true, restart count 94 Jul 1 01:20:08.342: INFO: etcd-latest-control-plane started at 2020-04-29 09:53:36 +0000 UTC (0+1 container statuses recorded) Jul 1 01:20:08.342: INFO: Container etcd ready: true, restart count 4 Jul 1 01:20:08.342: INFO: kube-apiserver-latest-control-plane started at 2020-04-29 09:53:36 +0000 UTC (0+1 container statuses recorded) Jul 1 01:20:08.342: INFO: Container kube-apiserver ready: true, restart count 
3 Jul 1 01:20:08.342: INFO: kube-proxy-h8mhz started at 2020-04-29 09:53:54 +0000 UTC (0+1 container statuses recorded) Jul 1 01:20:08.342: INFO: Container kube-proxy ready: true, restart count 0 Jul 1 01:20:08.342: INFO: coredns-66bff467f8-qr7l5 started at 2020-04-29 09:54:10 +0000 UTC (0+1 container statuses recorded) Jul 1 01:20:08.342: INFO: Container coredns ready: true, restart count 0 Jul 1 01:20:08.342: INFO: kube-scheduler-latest-control-plane started at 2020-04-29 09:53:36 +0000 UTC (0+1 container statuses recorded) Jul 1 01:20:08.342: INFO: Container kube-scheduler ready: true, restart count 124 Jul 1 01:20:08.342: INFO: kube-controller-manager-latest-control-plane started at 2020-04-29 09:53:36 +0000 UTC (0+1 container statuses recorded) Jul 1 01:20:08.342: INFO: Container kube-controller-manager ready: true, restart count 128 W0701 01:20:08.346768 8 metrics_grabber.go:94] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Jul 1 01:20:08.426: INFO: Latency metrics for node latest-control-plane Jul 1 01:20:08.426: INFO: Logging node info for node latest-worker Jul 1 01:20:08.430: INFO: Node Info: &Node{ObjectMeta:{latest-worker /api/v1/nodes/latest-worker 2f09bb79-b24c-46f4-8a0d-ace124db698c 17264778 0 2020-04-29 09:54:07 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:latest-worker kubernetes.io/os:linux] map[kubeadm.alpha.kubernetes.io/cri-socket:/run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2020-04-29 09:54:07 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {kube-controller-manager Update v1 2020-04-29 09:54:37 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}} {kubelet Update v1 2020-07-01 01:16:25 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:capacity":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}}]},Spec:NodeSpec{PodCIDR:10.244.1.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.1.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922129408 0} {} 131759892Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922129408 0} {} 131759892Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2020-07-01 01:16:25 +0000 UTC,LastTransitionTime:2020-04-29 09:54:07 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2020-07-01 01:16:25 +0000 UTC,LastTransitionTime:2020-04-29 09:54:07 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2020-07-01 01:16:25 +0000 UTC,LastTransitionTime:2020-04-29 09:54:07 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2020-07-01 01:16:25 +0000 UTC,LastTransitionTime:2020-04-29 09:54:37 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready 
status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.17.0.13,},NodeAddress{Type:Hostname,Address:latest-worker,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:83dc4a3bd84a4693999c93a6c8c5f678,SystemUUID:66e94596-e77d-487e-8e4a-bc652b040cea,BootID:ca2aa731-f890-4956-92a1-ff8c7560d571,KernelVersion:4.15.0-88-generic,OSImage:Ubuntu 19.10,ContainerRuntimeVersion:containerd://1.3.3-14-g449e9269,KubeletVersion:v1.18.2,KubeProxyVersion:v1.18.2,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[docker.io/ollivier/clearwater-cassandra@sha256:07e93f55decdc1224fb8d161edb5617d58e3488c1250168337548ccc3e82f6b7 docker.io/ollivier/clearwater-cassandra:latest],SizeBytes:386164043,},ContainerImage{Names:[docker.io/ollivier/clearwater-homestead-prov@sha256:141a336f17eaf068dbe8da4b01a832033aed5c09e7fa6349ec091ee30b76c9b1 docker.io/ollivier/clearwater-homestead-prov:latest],SizeBytes:360403156,},ContainerImage{Names:[docker.io/ollivier/clearwater-ellis@sha256:8c84761d2d906e344bc6a85a11451d35696cf684305555611df16ce2615ac816 docker.io/ollivier/clearwater-ellis:latest],SizeBytes:351094667,},ContainerImage{Names:[docker.io/ollivier/clearwater-homer@sha256:19c6d11d2678c44822f07c01c574fed426e3c99003b6af0410f0911d57939d5a docker.io/ollivier/clearwater-homer:latest],SizeBytes:343984685,},ContainerImage{Names:[docker.io/ollivier/clearwater-astaire@sha256:f365f3b72267bef0fd696e4a93c0f3c19fb65ad42a8850fe22873dbadd03fdba docker.io/ollivier/clearwater-astaire:latest],SizeBytes:326777758,},ContainerImage{Names:[docker.io/ollivier/clearwater-bono@sha256:eb98596100b1553c9814b6185863ec53e743eb0370faeeafe16fc1dfe8d02ec3 docker.io/ollivier/clearwater-bono:latest],SizeBytes:303283801,},ContainerImage{Names:[docker.io/ollivier/clearwater-sprout@sha256:44590682de48854faeccc1f4c7de39cb666014a0c4e3abd93adcccad3208a6e2 docker.io/ollivier/clearwater-sprout:latest],SizeBytes:298307172,},ContainerImage{Names:[docker.io/ollivier/clearwater-homestead@sha256:0b3c89ab451b09e347657d5f85ed99d47ec3e8689b98916af72b23576926b08d docker.io/ollivier/clearwater-homestead:latest],SizeBytes:294847386,},ContainerImage{Names:[k8s.gcr.io/etcd:3.4.3-0],SizeBytes:289997247,},ContainerImage{Names:[docker.io/ollivier/clearwater-ralf@sha256:20069a8d9f366dd0f003afa7c4fbcbcd5e9d2b99abae83540c6538fc7cff6b97 docker.io/ollivier/clearwater-ralf:latest],SizeBytes:287124270,},ContainerImage{Names:[docker.io/ollivier/clearwater-chronos@sha256:8ddcfa68c82ebf0b4ce6add019a8f57c024aec453f47a37017cf7dff8680268a docker.io/ollivier/clearwater-chronos:latest],SizeBytes:285184449,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver:v1.18.2],SizeBytes:146648881,},ContainerImage{Names:[k8s.gcr.io/kube-proxy:v1.18.2],SizeBytes:132860030,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager:v1.18.2],SizeBytes:132826433,},ContainerImage{Names:[docker.io/aquasec/kube-hunter@sha256:9e6d47f5fb42621781fac92b9f8f86a7e00596fd5c022472a51d33b8c6638b85 
docker.io/aquasec/kube-hunter:latest],SizeBytes:126124611,},ContainerImage{Names:[docker.io/aquasec/kube-hunter@sha256:5a7b70d343cfaeff79f6e6a8f473983a5eb7ca52f723aa8aa226aad4ee5b96e3],SizeBytes:125323634,},ContainerImage{Names:[docker.io/aquasec/kube-hunter@sha256:795d89480038d62363491066edd962a3f0042c338d4d9feb3f4db23ac659fb40],SizeBytes:124499152,},ContainerImage{Names:[docker.io/kindest/kindnetd:0.5.4],SizeBytes:113207016,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler:v1.18.2],SizeBytes:113095985,},ContainerImage{Names:[k8s.gcr.io/etcd@sha256:12f377200949c25fde1e54bba639d34d119edd7cfcfb1d117526dba677c03c85 k8s.gcr.io/etcd:3.4.7],SizeBytes:104221097,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/jessie-dnsutils@sha256:ad583e33cb284f7ef046673809b146ec4053cda19b54a85d2b180a86169715eb gcr.io/kubernetes-e2e-test-images/jessie-dnsutils:1.0],SizeBytes:85425365,},ContainerImage{Names:[k8s.gcr.io/debian-base:v2.0.0],SizeBytes:53884301,},ContainerImage{Names:[docker.io/ollivier/clearwater-live-test@sha256:c2efaddff058c146b93517d06a3a8066b6e88fecdd98fa6847cb69db22555f04 docker.io/ollivier/clearwater-live-test:latest],SizeBytes:46948523,},ContainerImage{Names:[us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost@sha256:6d5c9e684dd8f91cc36601933d51b91768d0606593de6820e19e5f194b0df1b9 us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13],SizeBytes:45704260,},ContainerImage{Names:[k8s.gcr.io/coredns:1.6.7],SizeBytes:43921887,},ContainerImage{Names:[docker.io/rancher/local-path-provisioner:v0.0.12],SizeBytes:41994847,},ContainerImage{Names:[docker.io/library/httpd@sha256:addd70e4ee83f3bc9a4c1c7c41e37927ba47faf639312fc936df3afad7926f5a docker.io/library/httpd:2.4.39-alpine],SizeBytes:41901429,},ContainerImage{Names:[docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060 docker.io/library/httpd:2.4.38-alpine],SizeBytes:40765017,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/sample-apiserver@sha256:ff02aacd9766d597883fabafc7ad604c719a57611db1bcc1564c69a45b000a55 gcr.io/kubernetes-e2e-test-images/sample-apiserver:1.17],SizeBytes:25311280,},ContainerImage{Names:[docker.io/aquasec/kube-bench@sha256:1e2b01ec091289327cd7e1b527c11b95db710ace489c9bd665c0d771c0225729],SizeBytes:8039938,},ContainerImage{Names:[docker.io/aquasec/kube-bench@sha256:9d86125c0409a16346857dbda530cf29583c87f186281745f539c12e3dcd38a7],SizeBytes:8039918,},ContainerImage{Names:[docker.io/aquasec/kube-bench@sha256:c42be6eafdbe71363ad6a2035fe53f12dbe36aab19a1a3c015231e97cd11d986],SizeBytes:8039911,},ContainerImage{Names:[docker.io/aquasec/kube-bench@sha256:bdfc3a8aeed63e545ab0df01806707219ffb785bca75e08cbee043075dedfb3c],SizeBytes:8039898,},ContainerImage{Names:[docker.io/aquasec/kube-bench@sha256:a3fe72ad3946d830134b92e5c922a92d4aeb594f0445d178f9e2d610b1be04b5],SizeBytes:8039861,},ContainerImage{Names:[docker.io/aquasec/kube-bench@sha256:6da1996cf654bbc10175028832d6ffb92720272d0deca971bb296ea9092f4273],SizeBytes:8039845,},ContainerImage{Names:[docker.io/aquasec/kube-bench@sha256:5979eaa13cb8b9b2027f4e75bb350a5af70d73719f2a260fa50f593ef63e857b 
docker.io/aquasec/kube-bench:latest],SizeBytes:8038593,},ContainerImage{Names:[docker.io/aquasec/kube-bench@sha256:bab47f459428d6cc682ec6b7cffd4203ce53c413748fe366f2533d0cda2808ce],SizeBytes:8037981,},ContainerImage{Names:[docker.io/aquasec/kube-bench@sha256:cab37ac2de78ddbc6655eddae38239ebafdf79a7934bc53361e1524a2ed5ab56],SizeBytes:8035885,},ContainerImage{Names:[docker.io/aquasec/kube-bench@sha256:ee55386ef35bea93a3a0900fd714038bebd156e0448addf839f38093dbbaace9],SizeBytes:8029111,},ContainerImage{Names:[quay.io/coreos/etcd:v2.2.5],SizeBytes:7670543,},ContainerImage{Names:[docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker.io/library/nginx:1.14-alpine],SizeBytes:6978806,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:4381769,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nonewprivs@sha256:10066e9039219449fe3c81f38fe01928f87914150768ab81b62a468e51fa7411 gcr.io/kubernetes-e2e-test-images/nonewprivs:1.0],SizeBytes:3054649,},ContainerImage{Names:[docker.io/appropriate/curl@sha256:c8bf5bbec6397465a247c2bb3e589bb77e4f62ff88a027175ecb2d9e4f12c9d7 docker.io/appropriate/curl:latest],SizeBytes:2779755,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nautilus@sha256:33a732d4c42a266912a5091598a0f07653c9134db4b8d571690d8afd509e0bfc gcr.io/kubernetes-e2e-test-images/nautilus:1.0],SizeBytes:1804628,},ContainerImage{Names:[docker.io/library/busybox@sha256:3a320776f9146d4efff6162d38f4d355e24cd852adb1ff5f8e32f1b23e4e33fa docker.io/library/busybox@sha256:9ddee63a712cea977267342e8750ecbc60d3aab25f04ceacfa795e6fce341793 docker.io/library/busybox:latest],SizeBytes:767885,},ContainerImage{Names:[docker.io/library/busybox@sha256:a8cf7ff6367c2afa2a90acd081b484cbded349a7076e7bdf37a05279f276bc12],SizeBytes:764955,},ContainerImage{Names:[docker.io/library/busybox@sha256:836945da1f3afe2cfff376d379852bbb82e0237cb2925d53a13f53d6e8a8c48c docker.io/library/busybox@sha256:52cfc475afdd697afd2dbe1a3761c8001bf3ba39f76819c922128c088869d339],SizeBytes:764948,},ContainerImage{Names:[docker.io/library/busybox@sha256:95cf004f559831017cdf4628aaf1bb30133677be8702a8c5f2994629f637a209],SizeBytes:764556,},ContainerImage{Names:[docker.io/library/busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796 docker.io/library/busybox:1.29],SizeBytes:732685,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Jul 1 01:20:08.431: INFO: Logging kubelet events for node latest-worker Jul 1 01:20:08.434: INFO: Logging pods the kubelet thinks is on node latest-worker Jul 1 01:20:08.455: INFO: kube-proxy-c8n27 started at 2020-04-29 09:54:13 +0000 UTC (0+1 container statuses recorded) Jul 1 01:20:08.455: INFO: Container kube-proxy ready: true, restart count 0 Jul 1 01:20:08.455: INFO: kindnet-hg2tf started at 2020-04-29 09:54:13 +0000 UTC (0+1 container statuses recorded) Jul 1 01:20:08.455: INFO: Container kindnet-cni ready: true, restart count 7 Jul 1 01:20:08.455: INFO: rally-c184502e-30nwopzm started at 2020-05-11 08:48:25 +0000 UTC (0+1 container statuses recorded) Jul 1 01:20:08.455: INFO: Container rally-c184502e-30nwopzm ready: true, restart count 0 Jul 1 01:20:08.455: INFO: rally-c184502e-30nwopzm-7fmqm started at 2020-05-11 08:48:29 +0000 UTC (0+1 container statuses recorded) Jul 1 01:20:08.455: INFO: Container rally-c184502e-30nwopzm ready: false, restart count 0 W0701 01:20:08.465271 8 
metrics_grabber.go:94] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Jul 1 01:20:08.519: INFO: Latency metrics for node latest-worker Jul 1 01:20:08.519: INFO: Logging node info for node latest-worker2 Jul 1 01:20:08.534: INFO: Node Info: &Node{ObjectMeta:{latest-worker2 /api/v1/nodes/latest-worker2 edb8c16e-16f9-40fa-97b0-84ba80a01b1f 17265062 0 2020-04-29 09:54:08 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:latest-worker2 kubernetes.io/os:linux] map[kubeadm.alpha.kubernetes.io/cri-socket:/run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2020-04-29 09:54:08 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {kube-controller-manager Update v1 2020-04-29 09:54:38 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.2.0/24\"":{}}}}} {kubelet Update v1 2020-07-01 01:18:04 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:capacity":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}}]},Spec:NodeSpec{PodCIDR:10.244.2.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.2.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922129408 0} {} 131759892Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922129408 0} {} 131759892Ki BinarySI},pods: {{110 0} {} 110 
DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2020-07-01 01:18:04 +0000 UTC,LastTransitionTime:2020-04-29 09:54:08 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2020-07-01 01:18:04 +0000 UTC,LastTransitionTime:2020-04-29 09:54:08 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2020-07-01 01:18:04 +0000 UTC,LastTransitionTime:2020-04-29 09:54:08 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2020-07-01 01:18:04 +0000 UTC,LastTransitionTime:2020-04-29 09:54:38 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.17.0.12,},NodeAddress{Type:Hostname,Address:latest-worker2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:a92a0b35db3a4f1fb7e74bf96e498c99,SystemUUID:8fa82d10-b80f-4f70-a9ff-665f94ff4ecc,BootID:ca2aa731-f890-4956-92a1-ff8c7560d571,KernelVersion:4.15.0-88-generic,OSImage:Ubuntu 19.10,ContainerRuntimeVersion:containerd://1.3.3-14-g449e9269,KubeletVersion:v1.18.2,KubeProxyVersion:v1.18.2,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[docker.io/ollivier/clearwater-cassandra@sha256:07e93f55decdc1224fb8d161edb5617d58e3488c1250168337548ccc3e82f6b7 docker.io/ollivier/clearwater-cassandra:latest],SizeBytes:386164043,},ContainerImage{Names:[docker.io/ollivier/clearwater-homestead-prov@sha256:141a336f17eaf068dbe8da4b01a832033aed5c09e7fa6349ec091ee30b76c9b1 docker.io/ollivier/clearwater-homestead-prov:latest],SizeBytes:360403156,},ContainerImage{Names:[docker.io/ollivier/clearwater-ellis@sha256:8c84761d2d906e344bc6a85a11451d35696cf684305555611df16ce2615ac816 docker.io/ollivier/clearwater-ellis:latest],SizeBytes:351094667,},ContainerImage{Names:[docker.io/ollivier/clearwater-homer@sha256:19c6d11d2678c44822f07c01c574fed426e3c99003b6af0410f0911d57939d5a docker.io/ollivier/clearwater-homer:latest],SizeBytes:343984685,},ContainerImage{Names:[docker.io/ollivier/clearwater-astaire@sha256:f365f3b72267bef0fd696e4a93c0f3c19fb65ad42a8850fe22873dbadd03fdba docker.io/ollivier/clearwater-astaire:latest],SizeBytes:326777758,},ContainerImage{Names:[docker.io/ollivier/clearwater-bono@sha256:eb98596100b1553c9814b6185863ec53e743eb0370faeeafe16fc1dfe8d02ec3 docker.io/ollivier/clearwater-bono:latest],SizeBytes:303283801,},ContainerImage{Names:[docker.io/ollivier/clearwater-sprout@sha256:44590682de48854faeccc1f4c7de39cb666014a0c4e3abd93adcccad3208a6e2 docker.io/ollivier/clearwater-sprout:latest],SizeBytes:298307172,},ContainerImage{Names:[docker.io/ollivier/clearwater-homestead@sha256:0b3c89ab451b09e347657d5f85ed99d47ec3e8689b98916af72b23576926b08d docker.io/ollivier/clearwater-homestead:latest],SizeBytes:294847386,},ContainerImage{Names:[k8s.gcr.io/etcd:3.4.3-0],SizeBytes:289997247,},ContainerImage{Names:[docker.io/ollivier/clearwater-ralf@sha256:20069a8d9f366dd0f003afa7c4fbcbcd5e9d2b99abae83540c6538fc7cff6b97 docker.io/ollivier/clearwater-ralf:latest],SizeBytes:287124270,},ContainerImage{Names:[docker.io/ollivier/clearwater-chronos@sha256:8ddcfa68c82ebf0b4ce6add019a8f57c024aec453f47a37017cf7dff8680268a 
docker.io/ollivier/clearwater-chronos:latest],SizeBytes:285184449,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver:v1.18.2],SizeBytes:146648881,},ContainerImage{Names:[k8s.gcr.io/kube-proxy:v1.18.2],SizeBytes:132860030,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager:v1.18.2],SizeBytes:132826433,},ContainerImage{Names:[docker.io/aquasec/kube-hunter@sha256:31a93c2501d1648258f610a15bbf40a41d4f10c319a621d5f8ab077d87fcf4b7 docker.io/aquasec/kube-hunter:latest],SizeBytes:127839307,},ContainerImage{Names:[docker.io/aquasec/kube-hunter@sha256:d0af3efaa83cf2106879b7fd3972faaee44a0d4a82db97b27f33f8c71aa450b3],SizeBytes:127384616,},ContainerImage{Names:[docker.io/aquasec/kube-hunter@sha256:9e6d47f5fb42621781fac92b9f8f86a7e00596fd5c022472a51d33b8c6638b85],SizeBytes:126124611,},ContainerImage{Names:[docker.io/aquasec/kube-hunter@sha256:5a7b70d343cfaeff79f6e6a8f473983a5eb7ca52f723aa8aa226aad4ee5b96e3],SizeBytes:125323634,},ContainerImage{Names:[docker.io/aquasec/kube-hunter@sha256:795d89480038d62363491066edd962a3f0042c338d4d9feb3f4db23ac659fb40],SizeBytes:124499152,},ContainerImage{Names:[docker.io/kindest/kindnetd:0.5.4],SizeBytes:113207016,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler:v1.18.2],SizeBytes:113095985,},ContainerImage{Names:[k8s.gcr.io/etcd@sha256:12f377200949c25fde1e54bba639d34d119edd7cfcfb1d117526dba677c03c85 k8s.gcr.io/etcd:3.4.7],SizeBytes:104221097,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/jessie-dnsutils@sha256:ad583e33cb284f7ef046673809b146ec4053cda19b54a85d2b180a86169715eb gcr.io/kubernetes-e2e-test-images/jessie-dnsutils:1.0],SizeBytes:85425365,},ContainerImage{Names:[k8s.gcr.io/debian-base:v2.0.0],SizeBytes:53884301,},ContainerImage{Names:[docker.io/ollivier/clearwater-live-test@sha256:c2efaddff058c146b93517d06a3a8066b6e88fecdd98fa6847cb69db22555f04 docker.io/ollivier/clearwater-live-test:latest],SizeBytes:46948523,},ContainerImage{Names:[us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost@sha256:6d5c9e684dd8f91cc36601933d51b91768d0606593de6820e19e5f194b0df1b9 us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13],SizeBytes:45704260,},ContainerImage{Names:[k8s.gcr.io/coredns:1.6.7],SizeBytes:43921887,},ContainerImage{Names:[docker.io/rancher/local-path-provisioner:v0.0.12],SizeBytes:41994847,},ContainerImage{Names:[docker.io/library/httpd@sha256:addd70e4ee83f3bc9a4c1c7c41e37927ba47faf639312fc936df3afad7926f5a docker.io/library/httpd:2.4.39-alpine],SizeBytes:41901429,},ContainerImage{Names:[docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060 docker.io/library/httpd:2.4.38-alpine],SizeBytes:40765017,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/sample-apiserver@sha256:ff02aacd9766d597883fabafc7ad604c719a57611db1bcc1564c69a45b000a55 gcr.io/kubernetes-e2e-test-images/sample-apiserver:1.17],SizeBytes:25311280,},ContainerImage{Names:[docker.io/aquasec/kube-bench@sha256:1e2b01ec091289327cd7e1b527c11b95db710ace489c9bd665c0d771c0225729],SizeBytes:8039938,},ContainerImage{Names:[docker.io/aquasec/kube-bench@sha256:9d86125c0409a16346857dbda530cf29583c87f186281745f539c12e3dcd38a7],SizeBytes:8039918,},ContainerImage{Names:[docker.io/aquasec/kube-bench@sha256:bdfc3a8aeed63e545ab0df01806707219ffb785bca75e08cbee043075dedfb3c],SizeBytes:8039898,},ContainerImage{Names:[docker.io/aquasec/kube-bench@sha256:a3fe72ad3946d830134b92e5c922a92d4aeb594f0445d178f9e2d610b1be04b5 
docker.io/aquasec/kube-bench:latest],SizeBytes:8039861,},ContainerImage{Names:[docker.io/aquasec/kube-bench@sha256:ee55386ef35bea93a3a0900fd714038bebd156e0448addf839f38093dbbaace9],SizeBytes:8029111,},ContainerImage{Names:[quay.io/coreos/etcd:v2.2.5],SizeBytes:7670543,},ContainerImage{Names:[docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker.io/library/nginx:1.14-alpine],SizeBytes:6978806,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:4381769,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nonewprivs@sha256:10066e9039219449fe3c81f38fe01928f87914150768ab81b62a468e51fa7411 gcr.io/kubernetes-e2e-test-images/nonewprivs:1.0],SizeBytes:3054649,},ContainerImage{Names:[docker.io/appropriate/curl@sha256:c8bf5bbec6397465a247c2bb3e589bb77e4f62ff88a027175ecb2d9e4f12c9d7 docker.io/appropriate/curl:latest],SizeBytes:2779755,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nautilus@sha256:33a732d4c42a266912a5091598a0f07653c9134db4b8d571690d8afd509e0bfc gcr.io/kubernetes-e2e-test-images/nautilus:1.0],SizeBytes:1804628,},ContainerImage{Names:[docker.io/library/busybox@sha256:3a320776f9146d4efff6162d38f4d355e24cd852adb1ff5f8e32f1b23e4e33fa docker.io/library/busybox@sha256:9ddee63a712cea977267342e8750ecbc60d3aab25f04ceacfa795e6fce341793 docker.io/library/busybox:latest],SizeBytes:767885,},ContainerImage{Names:[docker.io/library/busybox@sha256:a8cf7ff6367c2afa2a90acd081b484cbded349a7076e7bdf37a05279f276bc12],SizeBytes:764955,},ContainerImage{Names:[docker.io/library/busybox@sha256:52cfc475afdd697afd2dbe1a3761c8001bf3ba39f76819c922128c088869d339 docker.io/library/busybox@sha256:836945da1f3afe2cfff376d379852bbb82e0237cb2925d53a13f53d6e8a8c48c],SizeBytes:764948,},ContainerImage{Names:[docker.io/library/busybox@sha256:95cf004f559831017cdf4628aaf1bb30133677be8702a8c5f2994629f637a209],SizeBytes:764556,},ContainerImage{Names:[docker.io/library/busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796 docker.io/library/busybox:1.29],SizeBytes:732685,},ContainerImage{Names:[k8s.gcr.io/pause:3.2],SizeBytes:685724,},ContainerImage{Names:[docker.io/kubernetes/pause@sha256:b31bfb4d0213f254d361e0079deaaebefa4f82ba7aa76ef82e90b4935ad5b105 docker.io/kubernetes/pause:latest],SizeBytes:74015,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Jul 1 01:20:08.535: INFO: Logging kubelet events for node latest-worker2 Jul 1 01:20:08.538: INFO: Logging pods the kubelet thinks is on node latest-worker2 Jul 1 01:20:08.559: INFO: kindnet-jl4dn started at 2020-04-29 09:54:11 +0000 UTC (0+1 container statuses recorded) Jul 1 01:20:08.559: INFO: Container kindnet-cni ready: true, restart count 7 Jul 1 01:20:08.559: INFO: kube-proxy-pcmmp started at 2020-04-29 09:54:11 +0000 UTC (0+1 container statuses recorded) Jul 1 01:20:08.559: INFO: Container kube-proxy ready: true, restart count 0 Jul 1 01:20:08.559: INFO: rally-c184502e-ept97j69-6xvbj started at 2020-05-11 08:48:03 +0000 UTC (0+1 container statuses recorded) Jul 1 01:20:08.559: INFO: Container rally-c184502e-ept97j69 ready: false, restart count 0 Jul 1 01:20:08.559: INFO: terminate-cmd-rpa297bb112-e54d-4fcd-9997-b59cbf421a58 started at 2020-05-12 09:11:35 +0000 UTC (0+1 container statuses recorded) Jul 1 01:20:08.559: INFO: Container terminate-cmd-rpa ready: true, restart count 2 W0701 01:20:08.563812 8 metrics_grabber.go:94] Master node is 
not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Jul 1 01:20:08.623: INFO: Latency metrics for node latest-worker2 Jul 1 01:20:08.623: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "certificates-1973" for this suite. • Failure [0.676 seconds] [sig-auth] Certificates API [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23 should support building a client with a CSR [Conformance] [It] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Jul 1 01:20:08.237: Unexpected error:
    <*errors.StatusError | 0xc0014c46e0>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {
                SelfLink: "",
                ResourceVersion: "",
                Continue: "",
                RemainingItemCount: nil,
            },
            Status: "Failure",
            Message: "the server could not find the requested resource",
            Reason: "NotFound",
            Details: {Name: "", Group: "", Kind: "", UID: "", Causes: nil, RetryAfterSeconds: 0},
            Code: 404,
        },
    }
    the server could not find the requested resource
occurred /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/certificates.go:117 ------------------------------ {"msg":"FAILED [sig-auth] Certificates API [Privileged:ClusterAdmin] should support building a client with a CSR [Conformance]","total":294,"completed":288,"skipped":4752,"failed":2,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","[sig-auth] Certificates API [Privileged:ClusterAdmin] should support building a client with a CSR [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------
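The NotFound above is the key diagnostic of this run: a 404 with "the server could not find the requested resource" from the CSR endpoints generally means the client requested a group/version the apiserver does not serve. The certificates.k8s.io/v1 API was only introduced in Kubernetes 1.19, while the nodes in this cluster report v1.18.2, so a v1 CSR client would get exactly this error, which is consistent with both Certificates API specs failing the same way. Which versions of the group a cluster actually serves can be checked as follows (a sketch; the sample output is what a v1.18 server would be expected to print):

    $ kubectl api-versions | grep certificates.k8s.io
    certificates.k8s.io/v1beta1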
[sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 1 01:20:08.634: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin Jul 1 01:20:08.735: INFO: Waiting up to 5m0s for pod "downwardapi-volume-def0c4f4-f7c2-4ade-82ec-6c67a4277dd8" in namespace "projected-4773" to be "Succeeded or Failed" Jul 1 01:20:08.753: INFO: Pod "downwardapi-volume-def0c4f4-f7c2-4ade-82ec-6c67a4277dd8": Phase="Pending", Reason="", readiness=false. Elapsed: 17.398082ms Jul 1 01:20:10.757: INFO: Pod "downwardapi-volume-def0c4f4-f7c2-4ade-82ec-6c67a4277dd8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021853457s Jul 1 01:20:12.762: INFO: Pod "downwardapi-volume-def0c4f4-f7c2-4ade-82ec-6c67a4277dd8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.026134869s STEP: Saw pod success Jul 1 01:20:12.762: INFO: Pod "downwardapi-volume-def0c4f4-f7c2-4ade-82ec-6c67a4277dd8" satisfied condition "Succeeded or Failed" Jul 1 01:20:12.765: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-def0c4f4-f7c2-4ade-82ec-6c67a4277dd8 container client-container: STEP: delete the pod Jul 1 01:20:13.033: INFO: Waiting for pod downwardapi-volume-def0c4f4-f7c2-4ade-82ec-6c67a4277dd8 to disappear Jul 1 01:20:13.044: INFO: Pod downwardapi-volume-def0c4f4-f7c2-4ade-82ec-6c67a4277dd8 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 1 01:20:13.044: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-4773" for this suite. •{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance]","total":294,"completed":289,"skipped":4787,"failed":2,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","[sig-auth] Certificates API [Privileged:ClusterAdmin] should support building a client with a CSR [Conformance]"]} SSSSS ------------------------------ [sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 1 01:20:13.051: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a test namespace STEP: Waiting for a default service account to be provisioned in namespace STEP: Creating a service in the namespace STEP: Deleting the namespace STEP: Waiting for the namespace to be removed. STEP: Recreating the namespace STEP: Verifying there is no service in the namespace [AfterEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 1 01:20:19.438: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "namespaces-3175" for this suite. STEP: Destroying namespace "nsdeletetest-181" for this suite. Jul 1 01:20:19.463: INFO: Namespace nsdeletetest-181 was already deleted STEP: Destroying namespace "nsdeletetest-6771" for this suite.
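The namespace test above exercises cascading deletion: a Service created inside a namespace must disappear when the namespace is deleted, and recreating a namespace with the same name must yield an empty one. The same flow can be reproduced by hand with a minimal sketch (names are illustrative, not the test's generated ones):

    $ kubectl create namespace nsdelete-demo
    $ kubectl -n nsdelete-demo create service clusterip demo-svc --tcp=80:80
    $ kubectl delete namespace nsdelete-demo    # waits while the namespace contents are finalized
    $ kubectl -n nsdelete-demo get services     # fails with NotFound once the namespace is gone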
• [SLOW TEST:6.417 seconds] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance]","total":294,"completed":290,"skipped":4792,"failed":2,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","[sig-auth] Certificates API [Privileged:ClusterAdmin] should support building a client with a CSR [Conformance]"]} S ------------------------------ [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 1 01:20:19.468: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating secret with name secret-test-0e9577e6-1796-4021-9af4-532b9416bdd0 STEP: Creating a pod to test consume secrets Jul 1 01:20:19.585: INFO: Waiting up to 5m0s for pod "pod-secrets-4c4dbe1e-6a71-403e-9d99-d350f9584368" in namespace "secrets-971" to be "Succeeded or Failed" Jul 1 01:20:19.590: INFO: Pod "pod-secrets-4c4dbe1e-6a71-403e-9d99-d350f9584368": Phase="Pending", Reason="", readiness=false. Elapsed: 5.006686ms Jul 1 01:20:21.595: INFO: Pod "pod-secrets-4c4dbe1e-6a71-403e-9d99-d350f9584368": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009854877s Jul 1 01:20:23.606: INFO: Pod "pod-secrets-4c4dbe1e-6a71-403e-9d99-d350f9584368": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.021391307s STEP: Saw pod success Jul 1 01:20:23.606: INFO: Pod "pod-secrets-4c4dbe1e-6a71-403e-9d99-d350f9584368" satisfied condition "Succeeded or Failed" Jul 1 01:20:23.609: INFO: Trying to get logs from node latest-worker2 pod pod-secrets-4c4dbe1e-6a71-403e-9d99-d350f9584368 container secret-volume-test: STEP: delete the pod Jul 1 01:20:23.641: INFO: Waiting for pod pod-secrets-4c4dbe1e-6a71-403e-9d99-d350f9584368 to disappear Jul 1 01:20:23.650: INFO: Pod pod-secrets-4c4dbe1e-6a71-403e-9d99-d350f9584368 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 1 01:20:23.650: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-971" for this suite. 
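The secret-volume test just above follows the standard pattern for consuming a Secret as files: the kubelet projects each key of the Secret into a file under the mount path, and the test container simply reads it back. A minimal sketch of an equivalent pod (illustrative names; the e2e test generates its own names and uses a dedicated test image rather than busybox):

    $ kubectl create secret generic demo-secret --from-literal=data-1=value-1
    $ kubectl apply -f - <<EOF
    apiVersion: v1
    kind: Pod
    metadata:
      name: pod-secrets-demo
    spec:
      restartPolicy: Never
      containers:
      - name: secret-volume-test
        image: docker.io/library/busybox:1.29
        command: ["cat", "/etc/secret-volume/data-1"]
        volumeMounts:
        - name: secret-volume
          mountPath: /etc/secret-volume
          readOnly: true
      volumes:
      - name: secret-volume
        secret:
          secretName: demo-secret
    EOF
    $ kubectl logs pod-secrets-demo    # should print: value-1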
•{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance]","total":294,"completed":291,"skipped":4793,"failed":2,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","[sig-auth] Certificates API [Privileged:ClusterAdmin] should support building a client with a CSR [Conformance]"]} SSS ------------------------------ [sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jul 1 01:20:23.658: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] pod should support shared volumes between containers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating Pod STEP: Waiting for the pod running STEP: Geting the pod STEP: Reading file content from the nginx-container Jul 1 01:20:27.756: INFO: ExecWithOptions {Command:[/bin/sh -c cat /usr/share/volumeshare/shareddata.txt] Namespace:emptydir-2509 PodName:pod-sharedvolume-228d0b40-eb2b-4d2d-8391-73bf93b5d59b ContainerName:busybox-main-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jul 1 01:20:27.756: INFO: >>> kubeConfig: /root/.kube/config I0701 01:20:27.795425 8 log.go:172] (0xc001eb56b0) (0xc0011d1860) Create stream I0701 01:20:27.795461 8 log.go:172] (0xc001eb56b0) (0xc0011d1860) Stream added, broadcasting: 1 I0701 01:20:27.797679 8 log.go:172] (0xc001eb56b0) Reply frame received for 1 I0701 01:20:27.797713 8 log.go:172] (0xc001eb56b0) (0xc0011d1900) Create stream I0701 01:20:27.797726 8 log.go:172] (0xc001eb56b0) (0xc0011d1900) Stream added, broadcasting: 3 I0701 01:20:27.798616 8 log.go:172] (0xc001eb56b0) Reply frame received for 3 I0701 01:20:27.798633 8 log.go:172] (0xc001eb56b0) (0xc0011d19a0) Create stream I0701 01:20:27.798639 8 log.go:172] (0xc001eb56b0) (0xc0011d19a0) Stream added, broadcasting: 5 I0701 01:20:27.799464 8 log.go:172] (0xc001eb56b0) Reply frame received for 5 I0701 01:20:27.862521 8 log.go:172] (0xc001eb56b0) Data frame received for 5 I0701 01:20:27.862569 8 log.go:172] (0xc0011d19a0) (5) Data frame handling I0701 01:20:27.862615 8 log.go:172] (0xc001eb56b0) Data frame received for 3 I0701 01:20:27.862680 8 log.go:172] (0xc0011d1900) (3) Data frame handling I0701 01:20:27.862702 8 log.go:172] (0xc0011d1900) (3) Data frame sent I0701 01:20:27.862730 8 log.go:172] (0xc001eb56b0) Data frame received for 3 I0701 01:20:27.862752 8 log.go:172] (0xc0011d1900) (3) Data frame handling I0701 01:20:27.864454 8 log.go:172] (0xc001eb56b0) Data frame received for 1 I0701 01:20:27.864477 8 log.go:172] (0xc0011d1860) (1) Data frame handling I0701 01:20:27.864488 8 log.go:172] (0xc0011d1860) (1) Data frame sent I0701 01:20:27.864508 8 log.go:172] (0xc001eb56b0) (0xc0011d1860) Stream removed, broadcasting: 1 I0701 01:20:27.864533 8 log.go:172] (0xc001eb56b0) Go away received I0701 01:20:27.864656 8 log.go:172] (0xc001eb56b0) (0xc0011d1860) Stream removed, broadcasting: 1 I0701 01:20:27.864677 8 log.go:172] (0xc001eb56b0) (0xc0011d1900) Stream removed, broadcasting: 3 I0701 01:20:27.864690 8 
log.go:172] (0xc001eb56b0) (0xc0011d19a0) Stream removed, broadcasting: 5 Jul 1 01:20:27.864: INFO: Exec stderr: "" [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jul 1 01:20:27.864: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-2509" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance]","total":294,"completed":292,"skipped":4796,"failed":2,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","[sig-auth] Certificates API [Privileged:ClusterAdmin] should support building a client with a CSR [Conformance]"]} SSSSSSSSSSSSJul 1 01:20:27.874: INFO: Running AfterSuite actions on all nodes Jul 1 01:20:27.874: INFO: Running AfterSuite actions on node 1 Jul 1 01:20:27.874: INFO: Skipping dumping logs from cluster JUnit report was created: /home/opnfv/functest/results/k8s_conformance/junit_01.xml {"msg":"Test Suite completed","total":294,"completed":292,"skipped":4808,"failed":2,"failures":["[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","[sig-auth] Certificates API [Privileged:ClusterAdmin] should support building a client with a CSR [Conformance]"]} Summarizing 2 Failures: [Fail] [sig-auth] Certificates API [Privileged:ClusterAdmin] [It] should support CSR API operations [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/certificates.go:231 [Fail] [sig-auth] Certificates API [Privileged:ClusterAdmin] [It] should support building a client with a CSR [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/certificates.go:117 Ran 294 of 5102 Specs in 6071.086 seconds FAIL! -- 292 Passed | 2 Failed | 0 Pending | 4808 Skipped --- FAIL: TestE2E (6071.17s) FAIL
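Both failures in this run are confined to the Certificates API specs and share the NotFound root cause discussed above; the remaining 292 of 294 conformance specs passed. To iterate on just those two specs rather than repeating the full ~6071-second run, the e2e.test binary forwards focus patterns to Ginkgo. A sketch, assuming the same binary and kubeconfig used for this run:

    $ ./e2e.test -kubeconfig=/root/.kube/config -ginkgo.focus='\[sig-auth\] Certificates API'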