I0403 23:37:34.614154 7 test_context.go:423] Tolerating taints "node-role.kubernetes.io/master" when considering if nodes are ready
I0403 23:37:34.614311 7 e2e.go:124] Starting e2e run "cad17071-4b0c-4581-83f9-d423cc9db14b" on Ginkgo node 1
{"msg":"Test Suite starting","total":275,"completed":0,"skipped":0,"failed":0}
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1585957053 - Will randomize all specs
Will run 275 of 4992 specs

Apr 3 23:37:34.666: INFO: >>> kubeConfig: /root/.kube/config
Apr 3 23:37:34.672: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
Apr 3 23:37:34.698: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Apr 3 23:37:34.734: INFO: 12 / 12 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Apr 3 23:37:34.734: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready.
Apr 3 23:37:34.734: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
Apr 3 23:37:34.749: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kindnet' (0 seconds elapsed)
Apr 3 23:37:34.749: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
Apr 3 23:37:34.749: INFO: e2e test version: v1.19.0-alpha.0.779+84dc7046797aad
Apr 3 23:37:34.750: INFO: kube-apiserver version: v1.17.0
Apr 3 23:37:34.750: INFO: >>> kubeConfig: /root/.kube/config
Apr 3 23:37:34.756: INFO: Cluster IP family: ipv4
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 3 23:37:34.757: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
Apr 3 23:37:34.849: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Apr 3 23:37:35.251: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Apr 3 23:37:37.261: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721553855, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721553855, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721553855, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721553855, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6cc9cc9dc\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Apr 3 23:37:40.297: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should mutate custom resource [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Apr 3 23:37:40.302: INFO: >>> kubeConfig: /root/.kube/config
STEP: Registering the mutating webhook for custom resource e2e-test-webhook-3406-crds.webhook.example.com via the AdmissionRegistration API
STEP: Creating a custom resource that should be mutated by the webhook
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 3 23:37:41.440: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-9672" for this suite.
STEP: Destroying namespace "webhook-9672-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
• [SLOW TEST:6.749 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
should mutate custom resource [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]","total":275,"completed":1,"skipped":27,"failed":0}
SSSSSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 3 23:37:41.507: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153
[It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating the pod
Apr 3 23:37:41.555: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 3 23:37:46.710: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-2295" for this suite.
• [SLOW TEST:5.215 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]","total":275,"completed":2,"skipped":36,"failed":0}
S
------------------------------
[sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 3 23:37:46.722: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42
[It] should provide podname only [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test downward API volume plugin
Apr 3 23:37:46.824: INFO: Waiting up to 5m0s for pod "downwardapi-volume-2fa01f00-841c-451b-ae50-4c5233e7d545" in namespace "downward-api-1837" to be "Succeeded or Failed"
Apr 3 23:37:46.827: INFO: Pod "downwardapi-volume-2fa01f00-841c-451b-ae50-4c5233e7d545": Phase="Pending", Reason="", readiness=false. Elapsed: 2.944257ms
Apr 3 23:37:48.831: INFO: Pod "downwardapi-volume-2fa01f00-841c-451b-ae50-4c5233e7d545": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006977104s
Apr 3 23:37:50.835: INFO: Pod "downwardapi-volume-2fa01f00-841c-451b-ae50-4c5233e7d545": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011029052s
STEP: Saw pod success
Apr 3 23:37:50.835: INFO: Pod "downwardapi-volume-2fa01f00-841c-451b-ae50-4c5233e7d545" satisfied condition "Succeeded or Failed"
Apr 3 23:37:50.838: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-2fa01f00-841c-451b-ae50-4c5233e7d545 container client-container:
STEP: delete the pod
Apr 3 23:37:50.893: INFO: Waiting for pod downwardapi-volume-2fa01f00-841c-451b-ae50-4c5233e7d545 to disappear
Apr 3 23:37:50.903: INFO: Pod downwardapi-volume-2fa01f00-841c-451b-ae50-4c5233e7d545 no longer exists
[AfterEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 3 23:37:50.903: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-1837" for this suite.
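For context on what the "podname only" spec above exercises: the test pod projects its own `metadata.name` into a file through a downward API volume, and the client container prints that file. A minimal sketch of such a manifest follows; the pod name, image, mount path, and command here are illustrative assumptions, not taken from the log or the test source:

```yaml
# Hypothetical downward API volume pod: the pod's own name is projected
# into /etc/podinfo/podname, which the container reads and prints.
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-example   # assumed name
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox                   # assumed image
    command: ["sh", "-c", "cat /etc/podinfo/podname"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: podname
        fieldRef:
          fieldPath: metadata.name
```

The test then asserts the pod reaches "Succeeded" and that the container's log contains the pod's name, matching the "Saw pod success" and log-fetch steps above.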
•{"msg":"PASSED [sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance]","total":275,"completed":3,"skipped":37,"failed":0}
SSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 3 23:37:50.910: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should run with the expected status [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Container 'terminate-cmd-rpa': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpa': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpa': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpa': should get the expected 'State'
STEP: Container 'terminate-cmd-rpa': should be possible to delete [NodeConformance]
STEP: Container 'terminate-cmd-rpof': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpof': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpof': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpof': should get the expected 'State'
STEP: Container 'terminate-cmd-rpof': should be possible to delete [NodeConformance]
STEP: Container 'terminate-cmd-rpn': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpn': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpn': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpn': should get the expected 'State'
STEP: Container 'terminate-cmd-rpn': should be possible to delete [NodeConformance]
[AfterEach] [k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 3 23:38:22.767: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-1544" for this suite.
• [SLOW TEST:31.862 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
blackbox test
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:40
when starting a container that exits
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:41
should run with the expected status [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance]","total":275,"completed":4,"skipped":44,"failed":0}
S
------------------------------
[sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 3 23:38:22.773: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] updates should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating configMap with name configmap-test-upd-6a360802-b9be-4be2-86a0-1b14f37af4f0
STEP: Creating the pod
STEP: Updating configmap configmap-test-upd-6a360802-b9be-4be2-86a0-1b14f37af4f0
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 3 23:39:41.291: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-3900" for this suite.
• [SLOW TEST:78.526 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36
updates should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance]","total":275,"completed":5,"skipped":45,"failed":0}
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 3 23:39:41.300: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Apr 3 23:39:42.398: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Apr 3 23:39:44.410: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721553982, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721553982, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721553982, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721553982, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6cc9cc9dc\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Apr 3 23:39:47.425: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should mutate custom resource with pruning [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Apr 3 23:39:47.429: INFO: >>> kubeConfig: /root/.kube/config
STEP: Registering the mutating webhook for custom resource e2e-test-webhook-6675-crds.webhook.example.com via the AdmissionRegistration API
STEP: Creating a custom resource that should be mutated by the webhook
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 3 23:39:48.579: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-1089" for this suite.
STEP: Destroying namespace "webhook-1089-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
• [SLOW TEST:7.380 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
should mutate custom resource with pruning [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]","total":275,"completed":6,"skipped":64,"failed":0}
SSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl expose should create services for rc [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 3 23:39:48.680: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219
[It] should create services for rc [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating Agnhost RC
Apr 3 23:39:48.721: INFO: namespace kubectl-5467
Apr 3 23:39:48.721: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-5467'
Apr 3 23:39:51.525: INFO: stderr: ""
Apr 3 23:39:51.526: INFO: stdout: "replicationcontroller/agnhost-master created\n"
STEP: Waiting for Agnhost master to start.
Apr 3 23:39:52.530: INFO: Selector matched 1 pods for map[app:agnhost]
Apr 3 23:39:52.530: INFO: Found 0 / 1
Apr 3 23:39:53.530: INFO: Selector matched 1 pods for map[app:agnhost]
Apr 3 23:39:53.530: INFO: Found 0 / 1
Apr 3 23:39:54.530: INFO: Selector matched 1 pods for map[app:agnhost]
Apr 3 23:39:54.530: INFO: Found 1 / 1
Apr 3 23:39:54.530: INFO: WaitFor completed with timeout 5m0s.  Pods found = 1 out of 1
Apr 3 23:39:54.533: INFO: Selector matched 1 pods for map[app:agnhost]
Apr 3 23:39:54.533: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
Apr 3 23:39:54.533: INFO: wait on agnhost-master startup in kubectl-5467
Apr 3 23:39:54.533: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config logs agnhost-master-hp2w6 agnhost-master --namespace=kubectl-5467'
Apr 3 23:39:54.645: INFO: stderr: ""
Apr 3 23:39:54.646: INFO: stdout: "Paused\n"
STEP: exposing RC
Apr 3 23:39:54.646: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config expose rc agnhost-master --name=rm2 --port=1234 --target-port=6379 --namespace=kubectl-5467'
Apr 3 23:39:54.788: INFO: stderr: ""
Apr 3 23:39:54.788: INFO: stdout: "service/rm2 exposed\n"
Apr 3 23:39:54.799: INFO: Service rm2 in namespace kubectl-5467 found.
STEP: exposing service
Apr 3 23:39:56.806: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config expose service rm2 --name=rm3 --port=2345 --target-port=6379 --namespace=kubectl-5467'
Apr 3 23:39:56.930: INFO: stderr: ""
Apr 3 23:39:56.930: INFO: stdout: "service/rm3 exposed\n"
Apr 3 23:39:56.940: INFO: Service rm3 in namespace kubectl-5467 found.
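For readers following along: the `kubectl expose rc agnhost-master --name=rm2 --port=1234 --target-port=6379` call logged above is roughly equivalent to creating a Service by hand. A sketch of that Service (the `app: agnhost` selector is inferred from the `map[app:agnhost]` pod labels the test matches on, not printed by `kubectl expose` itself):

```yaml
# Hypothetical hand-written equivalent of the `kubectl expose rc` call
# in the log above; selector inferred from the RC's pod labels.
apiVersion: v1
kind: Service
metadata:
  name: rm2
  namespace: kubectl-5467
spec:
  selector:
    app: agnhost
  ports:
  - port: 1234        # service port, from --port
    targetPort: 6379  # container port, from --target-port
```

The second step, `kubectl expose service rm2 --name=rm3 --port=2345 --target-port=6379`, produces a second Service (`rm3`) selecting the same pods, which is what the test verifies.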
[AfterEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 3 23:39:58.948: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-5467" for this suite.
• [SLOW TEST:10.277 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
Kubectl expose
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1119
should create services for rc [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl expose should create services for rc [Conformance]","total":275,"completed":7,"skipped":74,"failed":0}
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] DNS should support configurable pod DNS nameservers [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 3 23:39:58.957: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support configurable pod DNS nameservers [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod with dnsPolicy=None and customized dnsConfig...
Apr 3 23:39:59.039: INFO: Created pod &Pod{ObjectMeta:{dns-5242  dns-5242 /api/v1/namespaces/dns-5242/pods/dns-5242 275987d6-4c9b-44e2-8202-03f49dbba338 5188555 0 2020-04-03 23:39:59 +0000 UTC map[] map[] [] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-56v7h,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-56v7h,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12,Command:[],Args:[pause],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-56v7h,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:None,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:&PodDNSConfig{Nameservers:[1.1.1.1],Searches:[resolv.conf.local],Options:[]PodDNSConfigOption{},},ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{},Message:,Reason:,HostIP:,PodIP:,StartTime:<nil>,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Apr 3 23:39:59.047: INFO: The status of Pod dns-5242 is Pending, waiting for it to be Running (with Ready = true)
Apr 3 23:40:01.051: INFO: The status of Pod dns-5242 is Pending, waiting for it to be Running (with Ready = true)
Apr 3 23:40:03.052: INFO: The status of Pod dns-5242 is Running (Ready = true)
STEP: Verifying customized DNS suffix list is configured on pod...
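The Go struct dump of the created Pod is dense; the fields this DNS test actually exercises reduce to a manifest roughly like the following. This is a sketch reconstructed from the dump (DNSPolicy, DNSConfig, image, and args), not the test's literal source:

```yaml
# Reconstructed from the Pod dump above: dnsPolicy "None" plus a custom
# dnsConfig, which the test later verifies inside the container's resolv.conf.
apiVersion: v1
kind: Pod
metadata:
  name: dns-5242
  namespace: dns-5242
spec:
  dnsPolicy: "None"
  dnsConfig:
    nameservers:
    - 1.1.1.1
    searches:
    - resolv.conf.local
  containers:
  - name: agnhost
    image: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12
    args: ["pause"]
```

With `dnsPolicy: "None"` the kubelet ignores cluster DNS entirely and writes the pod's resolv.conf from `dnsConfig` alone, which is why the exec steps that follow check for exactly `1.1.1.1` and `resolv.conf.local`.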
Apr 3 23:40:03.052: INFO: ExecWithOptions {Command:[/agnhost dns-suffix] Namespace:dns-5242 PodName:dns-5242 ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Apr 3 23:40:03.052: INFO: >>> kubeConfig: /root/.kube/config
I0403 23:40:03.092628 7 log.go:172] (0xc002d20c60) (0xc000fe5ae0) Create stream
I0403 23:40:03.092658 7 log.go:172] (0xc002d20c60) (0xc000fe5ae0) Stream added, broadcasting: 1
I0403 23:40:03.104498 7 log.go:172] (0xc002d20c60) Reply frame received for 1
I0403 23:40:03.104542 7 log.go:172] (0xc002d20c60) (0xc000fc17c0) Create stream
I0403 23:40:03.104553 7 log.go:172] (0xc002d20c60) (0xc000fc17c0) Stream added, broadcasting: 3
I0403 23:40:03.105682 7 log.go:172] (0xc002d20c60) Reply frame received for 3
I0403 23:40:03.105732 7 log.go:172] (0xc002d20c60) (0xc000fe5b80) Create stream
I0403 23:40:03.105756 7 log.go:172] (0xc002d20c60) (0xc000fe5b80) Stream added, broadcasting: 5
I0403 23:40:03.106629 7 log.go:172] (0xc002d20c60) Reply frame received for 5
I0403 23:40:03.179910 7 log.go:172] (0xc002d20c60) Data frame received for 3
I0403 23:40:03.179995 7 log.go:172] (0xc000fc17c0) (3) Data frame handling
I0403 23:40:03.180062 7 log.go:172] (0xc000fc17c0) (3) Data frame sent
I0403 23:40:03.180777 7 log.go:172] (0xc002d20c60) Data frame received for 5
I0403 23:40:03.180892 7 log.go:172] (0xc000fe5b80) (5) Data frame handling
I0403 23:40:03.180944 7 log.go:172] (0xc002d20c60) Data frame received for 3
I0403 23:40:03.180979 7 log.go:172] (0xc000fc17c0) (3) Data frame handling
I0403 23:40:03.183202 7 log.go:172] (0xc002d20c60) Data frame received for 1
I0403 23:40:03.183236 7 log.go:172] (0xc000fe5ae0) (1) Data frame handling
I0403 23:40:03.183252 7 log.go:172] (0xc000fe5ae0) (1) Data frame sent
I0403 23:40:03.183268 7 log.go:172] (0xc002d20c60) (0xc000fe5ae0) Stream removed, broadcasting: 1
I0403 23:40:03.183285 7 log.go:172] (0xc002d20c60) Go away received
I0403 23:40:03.183885 7 log.go:172] (0xc002d20c60) (0xc000fe5ae0) Stream removed, broadcasting: 1
I0403 23:40:03.183905 7 log.go:172] (0xc002d20c60) (0xc000fc17c0) Stream removed, broadcasting: 3
I0403 23:40:03.183916 7 log.go:172] (0xc002d20c60) (0xc000fe5b80) Stream removed, broadcasting: 5
STEP: Verifying customized DNS server is configured on pod...
Apr 3 23:40:03.183: INFO: ExecWithOptions {Command:[/agnhost dns-server-list] Namespace:dns-5242 PodName:dns-5242 ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Apr 3 23:40:03.183: INFO: >>> kubeConfig: /root/.kube/config
I0403 23:40:03.220964 7 log.go:172] (0xc002d21290) (0xc0004334a0) Create stream
I0403 23:40:03.220995 7 log.go:172] (0xc002d21290) (0xc0004334a0) Stream added, broadcasting: 1
I0403 23:40:03.223735 7 log.go:172] (0xc002d21290) Reply frame received for 1
I0403 23:40:03.223776 7 log.go:172] (0xc002d21290) (0xc000b4b720) Create stream
I0403 23:40:03.223790 7 log.go:172] (0xc002d21290) (0xc000b4b720) Stream added, broadcasting: 3
I0403 23:40:03.224902 7 log.go:172] (0xc002d21290) Reply frame received for 3
I0403 23:40:03.224940 7 log.go:172] (0xc002d21290) (0xc000b4b9a0) Create stream
I0403 23:40:03.224957 7 log.go:172] (0xc002d21290) (0xc000b4b9a0) Stream added, broadcasting: 5
I0403 23:40:03.234183 7 log.go:172] (0xc002d21290) Reply frame received for 5
I0403 23:40:03.328849 7 log.go:172] (0xc002d21290) Data frame received for 3
I0403 23:40:03.328884 7 log.go:172] (0xc000b4b720) (3) Data frame handling
I0403 23:40:03.328928 7 log.go:172] (0xc000b4b720) (3) Data frame sent
I0403 23:40:03.329382 7 log.go:172] (0xc002d21290) Data frame received for 3
I0403 23:40:03.329416 7 log.go:172] (0xc000b4b720) (3) Data frame handling
I0403 23:40:03.329756 7 log.go:172] (0xc002d21290) Data frame received for 5
I0403 23:40:03.329790 7 log.go:172] (0xc000b4b9a0) (5) Data frame handling
I0403 23:40:03.330996 7 log.go:172] (0xc002d21290) Data frame received for 1
I0403 23:40:03.331077 7 log.go:172] (0xc0004334a0) (1) Data frame handling
I0403 23:40:03.331113 7 log.go:172] (0xc0004334a0) (1) Data frame sent
I0403 23:40:03.331144 7 log.go:172] (0xc002d21290) (0xc0004334a0) Stream removed, broadcasting: 1
I0403 23:40:03.331226 7 log.go:172] (0xc002d21290) Go away received
I0403 23:40:03.331304 7 log.go:172] (0xc002d21290) (0xc0004334a0) Stream removed, broadcasting: 1
I0403 23:40:03.331339 7 log.go:172] (0xc002d21290) (0xc000b4b720) Stream removed, broadcasting: 3
I0403 23:40:03.331361 7 log.go:172] (0xc002d21290) (0xc000b4b9a0) Stream removed, broadcasting: 5
Apr 3 23:40:03.331: INFO: Deleting pod dns-5242...
[AfterEach] [sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 3 23:40:03.343: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-5242" for this suite.
•{"msg":"PASSED [sig-network] DNS should support configurable pod DNS nameservers [Conformance]","total":275,"completed":8,"skipped":94,"failed":0}
SSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 3 23:40:03.355: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42
[It] should update labels on modification [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating the pod
Apr 3 23:40:08.139: INFO: Successfully updated pod "labelsupdate1daa4997-24d1-4901-b974-b7afa2a31fd5"
[AfterEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 3 23:40:12.199: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-2458" for this suite.
• [SLOW TEST:8.853 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:37
should update labels on modification [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance]","total":275,"completed":9,"skipped":105,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 3 23:40:12.209: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: create the container
STEP: wait for the container to reach
Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set Apr 3 23:40:16.290: INFO: Expected: &{DONE} to match Container's Termination Message: DONE -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 3 23:40:16.341: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-8804" for this suite. •{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]","total":275,"completed":10,"skipped":162,"failed":0} SSS ------------------------------ [sig-apps] ReplicationController should release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 3 23:40:16.349: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Given a ReplicationController is created STEP: When the matched label of one of its pods change Apr 3 23:40:16.461: INFO: Pod name pod-release: Found 0 pods out of 1 Apr 3 23:40:21.464: INFO: Pod name pod-release: Found 1 pods out of 1 STEP: Then the pod is released [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 
Apr 3 23:40:21.515: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-7204" for this suite. • [SLOW TEST:5.215 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-apps] ReplicationController should release no longer matching pods [Conformance]","total":275,"completed":11,"skipped":165,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should include webhook resources in discovery documents [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 3 23:40:21.564: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Apr 3 23:40:22.144: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Apr 3 23:40:24.155: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, 
Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721554022, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721554022, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721554022, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721554022, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6cc9cc9dc\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Apr 3 23:40:27.182: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should include webhook resources in discovery documents [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: fetching the /apis discovery document STEP: finding the admissionregistration.k8s.io API group in the /apis discovery document STEP: finding the admissionregistration.k8s.io/v1 API group/version in the /apis discovery document STEP: fetching the /apis/admissionregistration.k8s.io discovery document STEP: finding the admissionregistration.k8s.io/v1 API group/version in the /apis/admissionregistration.k8s.io discovery document STEP: fetching the /apis/admissionregistration.k8s.io/v1 discovery document STEP: finding mutatingwebhookconfigurations and validatingwebhookconfigurations resources in the /apis/admissionregistration.k8s.io/v1 discovery document [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 3 
23:40:27.190: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-6078" for this suite. STEP: Destroying namespace "webhook-6078-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:5.751 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should include webhook resources in discovery documents [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should include webhook resources in discovery documents [Conformance]","total":275,"completed":12,"skipped":184,"failed":0} SSS ------------------------------ [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 3 23:40:27.315: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating configMap configmap-8167/configmap-test-166eefd5-ee69-43e1-9bb0-892e735ef0e4 STEP: Creating a pod to test consume configMaps Apr 3 23:40:27.412: INFO: Waiting up to 5m0s for pod "pod-configmaps-2e973a71-47d8-4d47-a6c5-4eebdd709426" in namespace 
"configmap-8167" to be "Succeeded or Failed" Apr 3 23:40:27.476: INFO: Pod "pod-configmaps-2e973a71-47d8-4d47-a6c5-4eebdd709426": Phase="Pending", Reason="", readiness=false. Elapsed: 64.133355ms Apr 3 23:40:29.480: INFO: Pod "pod-configmaps-2e973a71-47d8-4d47-a6c5-4eebdd709426": Phase="Pending", Reason="", readiness=false. Elapsed: 2.068308161s Apr 3 23:40:31.484: INFO: Pod "pod-configmaps-2e973a71-47d8-4d47-a6c5-4eebdd709426": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.071942691s STEP: Saw pod success Apr 3 23:40:31.484: INFO: Pod "pod-configmaps-2e973a71-47d8-4d47-a6c5-4eebdd709426" satisfied condition "Succeeded or Failed" Apr 3 23:40:31.486: INFO: Trying to get logs from node latest-worker2 pod pod-configmaps-2e973a71-47d8-4d47-a6c5-4eebdd709426 container env-test: STEP: delete the pod Apr 3 23:40:31.506: INFO: Waiting for pod pod-configmaps-2e973a71-47d8-4d47-a6c5-4eebdd709426 to disappear Apr 3 23:40:31.522: INFO: Pod pod-configmaps-2e973a71-47d8-4d47-a6c5-4eebdd709426 no longer exists [AfterEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 3 23:40:31.522: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-8167" for this suite. 
•{"msg":"PASSED [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance]","total":275,"completed":13,"skipped":187,"failed":0} SSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 3 23:40:31.551: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Apr 3 23:40:31.959: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Apr 3 23:40:33.968: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721554031, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721554031, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721554032, 
loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721554031, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6cc9cc9dc\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Apr 3 23:40:36.998: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] listing validating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Listing all of the created validation webhooks STEP: Creating a configMap that does not comply to the validation webhook rules STEP: Deleting the collection of validation webhooks STEP: Creating a configMap that does not comply to the validation webhook rules [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 3 23:40:37.382: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-7898" for this suite. STEP: Destroying namespace "webhook-7898-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:5.900 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 listing validating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance]","total":275,"completed":14,"skipped":191,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Job should delete a job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 3 23:40:37.451: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename job STEP: Waiting for a default service account to be provisioned in namespace [It] should delete a job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a job STEP: Ensuring active pods == parallelism STEP: delete a job STEP: deleting Job.batch foo in namespace job-2015, will wait for the garbage collector to delete the pods Apr 3 23:40:41.587: INFO: Deleting Job.batch foo took: 5.992855ms Apr 3 23:40:41.687: INFO: Terminating Job.batch foo pods took: 100.2166ms STEP: Ensuring job was deleted [AfterEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 3 23:41:22.890: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: 
Destroying namespace "job-2015" for this suite. • [SLOW TEST:45.556 seconds] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should delete a job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-apps] Job should delete a job [Conformance]","total":275,"completed":15,"skipped":216,"failed":0} SSSSSS ------------------------------ [sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 3 23:41:23.007: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:698 [It] should be able to change the type from ExternalName to ClusterIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating a service externalname-service with the type=ExternalName in namespace services-4865 STEP: changing the ExternalName service to type=ClusterIP STEP: creating replication controller externalname-service in namespace services-4865 I0403 23:41:23.208664 7 runners.go:190] Created replication controller with name: externalname-service, namespace: services-4865, replica count: 2 I0403 23:41:26.259158 7 runners.go:190] externalname-service Pods: 2 out of 2 created, 1 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0403 23:41:29.259452 7 runners.go:190] 
externalname-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Apr 3 23:41:29.259: INFO: Creating new exec pod Apr 3 23:41:34.276: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=services-4865 execpod4q2c4 -- /bin/sh -x -c nc -zv -t -w 2 externalname-service 80' Apr 3 23:41:34.530: INFO: stderr: "I0403 23:41:34.420778 124 log.go:172] (0xc00003bef0) (0xc000685400) Create stream\nI0403 23:41:34.420860 124 log.go:172] (0xc00003bef0) (0xc000685400) Stream added, broadcasting: 1\nI0403 23:41:34.423692 124 log.go:172] (0xc00003bef0) Reply frame received for 1\nI0403 23:41:34.423741 124 log.go:172] (0xc00003bef0) (0xc0008e4000) Create stream\nI0403 23:41:34.423757 124 log.go:172] (0xc00003bef0) (0xc0008e4000) Stream added, broadcasting: 3\nI0403 23:41:34.424743 124 log.go:172] (0xc00003bef0) Reply frame received for 3\nI0403 23:41:34.424776 124 log.go:172] (0xc00003bef0) (0xc0008e4140) Create stream\nI0403 23:41:34.424786 124 log.go:172] (0xc00003bef0) (0xc0008e4140) Stream added, broadcasting: 5\nI0403 23:41:34.425950 124 log.go:172] (0xc00003bef0) Reply frame received for 5\nI0403 23:41:34.521816 124 log.go:172] (0xc00003bef0) Data frame received for 5\nI0403 23:41:34.521852 124 log.go:172] (0xc0008e4140) (5) Data frame handling\nI0403 23:41:34.521875 124 log.go:172] (0xc0008e4140) (5) Data frame sent\n+ nc -zv -t -w 2 externalname-service 80\nI0403 23:41:34.522269 124 log.go:172] (0xc00003bef0) Data frame received for 5\nI0403 23:41:34.522290 124 log.go:172] (0xc0008e4140) (5) Data frame handling\nI0403 23:41:34.522304 124 log.go:172] (0xc0008e4140) (5) Data frame sent\nConnection to externalname-service 80 port [tcp/http] succeeded!\nI0403 23:41:34.522578 124 log.go:172] (0xc00003bef0) Data frame received for 5\nI0403 23:41:34.522613 124 log.go:172] (0xc0008e4140) (5) Data frame handling\nI0403 23:41:34.522803 124 
log.go:172] (0xc00003bef0) Data frame received for 3\nI0403 23:41:34.522837 124 log.go:172] (0xc0008e4000) (3) Data frame handling\nI0403 23:41:34.524766 124 log.go:172] (0xc00003bef0) Data frame received for 1\nI0403 23:41:34.524789 124 log.go:172] (0xc000685400) (1) Data frame handling\nI0403 23:41:34.524802 124 log.go:172] (0xc000685400) (1) Data frame sent\nI0403 23:41:34.524825 124 log.go:172] (0xc00003bef0) (0xc000685400) Stream removed, broadcasting: 1\nI0403 23:41:34.524850 124 log.go:172] (0xc00003bef0) Go away received\nI0403 23:41:34.525470 124 log.go:172] (0xc00003bef0) (0xc000685400) Stream removed, broadcasting: 1\nI0403 23:41:34.525501 124 log.go:172] (0xc00003bef0) (0xc0008e4000) Stream removed, broadcasting: 3\nI0403 23:41:34.525515 124 log.go:172] (0xc00003bef0) (0xc0008e4140) Stream removed, broadcasting: 5\n" Apr 3 23:41:34.530: INFO: stdout: "" Apr 3 23:41:34.531: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=services-4865 execpod4q2c4 -- /bin/sh -x -c nc -zv -t -w 2 10.96.31.178 80' Apr 3 23:41:34.725: INFO: stderr: "I0403 23:41:34.662009 144 log.go:172] (0xc000b456b0) (0xc000a5a820) Create stream\nI0403 23:41:34.662069 144 log.go:172] (0xc000b456b0) (0xc000a5a820) Stream added, broadcasting: 1\nI0403 23:41:34.666865 144 log.go:172] (0xc000b456b0) Reply frame received for 1\nI0403 23:41:34.666948 144 log.go:172] (0xc000b456b0) (0xc000601720) Create stream\nI0403 23:41:34.666971 144 log.go:172] (0xc000b456b0) (0xc000601720) Stream added, broadcasting: 3\nI0403 23:41:34.668215 144 log.go:172] (0xc000b456b0) Reply frame received for 3\nI0403 23:41:34.668269 144 log.go:172] (0xc000b456b0) (0xc0004e0b40) Create stream\nI0403 23:41:34.668289 144 log.go:172] (0xc000b456b0) (0xc0004e0b40) Stream added, broadcasting: 5\nI0403 23:41:34.669327 144 log.go:172] (0xc000b456b0) Reply frame received for 5\nI0403 23:41:34.720321 144 log.go:172] (0xc000b456b0) Data frame received for 
3\nI0403 23:41:34.720350 144 log.go:172] (0xc000601720) (3) Data frame handling\nI0403 23:41:34.720371 144 log.go:172] (0xc000b456b0) Data frame received for 5\nI0403 23:41:34.720394 144 log.go:172] (0xc0004e0b40) (5) Data frame handling\nI0403 23:41:34.720415 144 log.go:172] (0xc0004e0b40) (5) Data frame sent\nI0403 23:41:34.720427 144 log.go:172] (0xc000b456b0) Data frame received for 5\n+ nc -zv -t -w 2 10.96.31.178 80\nConnection to 10.96.31.178 80 port [tcp/http] succeeded!\nI0403 23:41:34.720445 144 log.go:172] (0xc0004e0b40) (5) Data frame handling\nI0403 23:41:34.722065 144 log.go:172] (0xc000b456b0) Data frame received for 1\nI0403 23:41:34.722079 144 log.go:172] (0xc000a5a820) (1) Data frame handling\nI0403 23:41:34.722089 144 log.go:172] (0xc000a5a820) (1) Data frame sent\nI0403 23:41:34.722100 144 log.go:172] (0xc000b456b0) (0xc000a5a820) Stream removed, broadcasting: 1\nI0403 23:41:34.722337 144 log.go:172] (0xc000b456b0) Go away received\nI0403 23:41:34.722366 144 log.go:172] (0xc000b456b0) (0xc000a5a820) Stream removed, broadcasting: 1\nI0403 23:41:34.722375 144 log.go:172] (0xc000b456b0) (0xc000601720) Stream removed, broadcasting: 3\nI0403 23:41:34.722380 144 log.go:172] (0xc000b456b0) (0xc0004e0b40) Stream removed, broadcasting: 5\n" Apr 3 23:41:34.726: INFO: stdout: "" Apr 3 23:41:34.726: INFO: Cleaning up the ExternalName to ClusterIP test service [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 3 23:41:34.769: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-4865" for this suite. 
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:702 • [SLOW TEST:11.772 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from ExternalName to ClusterIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","total":275,"completed":16,"skipped":222,"failed":0} SSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 3 23:41:34.779: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219 [BeforeEach] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:271 [It] should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating a replication controller Apr 3 23:41:34.824: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-6849' Apr 3 23:41:35.136: INFO: stderr: "" Apr 3 23:41:35.136: INFO: stdout: "replicationcontroller/update-demo-nautilus 
created\n" STEP: waiting for all containers in name=update-demo pods to come up. Apr 3 23:41:35.136: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-6849' Apr 3 23:41:35.255: INFO: stderr: "" Apr 3 23:41:35.255: INFO: stdout: "update-demo-nautilus-5nxb7 update-demo-nautilus-rsk94 " Apr 3 23:41:35.255: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-5nxb7 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-6849' Apr 3 23:41:35.343: INFO: stderr: "" Apr 3 23:41:35.343: INFO: stdout: "" Apr 3 23:41:35.343: INFO: update-demo-nautilus-5nxb7 is created but not running Apr 3 23:41:40.344: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-6849' Apr 3 23:41:40.446: INFO: stderr: "" Apr 3 23:41:40.446: INFO: stdout: "update-demo-nautilus-5nxb7 update-demo-nautilus-rsk94 " Apr 3 23:41:40.446: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-5nxb7 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-6849' Apr 3 23:41:40.535: INFO: stderr: "" Apr 3 23:41:40.535: INFO: stdout: "true" Apr 3 23:41:40.535: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-5nxb7 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-6849' Apr 3 23:41:40.641: INFO: stderr: "" Apr 3 23:41:40.641: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Apr 3 23:41:40.641: INFO: validating pod update-demo-nautilus-5nxb7 Apr 3 23:41:40.695: INFO: got data: { "image": "nautilus.jpg" } Apr 3 23:41:40.695: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Apr 3 23:41:40.695: INFO: update-demo-nautilus-5nxb7 is verified up and running Apr 3 23:41:40.695: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-rsk94 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-6849' Apr 3 23:41:40.787: INFO: stderr: "" Apr 3 23:41:40.787: INFO: stdout: "true" Apr 3 23:41:40.787: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-rsk94 -o template --template={{if (exists . 
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-6849' Apr 3 23:41:40.887: INFO: stderr: "" Apr 3 23:41:40.887: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Apr 3 23:41:40.887: INFO: validating pod update-demo-nautilus-rsk94 Apr 3 23:41:40.892: INFO: got data: { "image": "nautilus.jpg" } Apr 3 23:41:40.892: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Apr 3 23:41:40.892: INFO: update-demo-nautilus-rsk94 is verified up and running STEP: using delete to clean up resources Apr 3 23:41:40.893: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-6849' Apr 3 23:41:40.990: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Apr 3 23:41:40.990: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" Apr 3 23:41:40.990: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-6849' Apr 3 23:41:41.099: INFO: stderr: "No resources found in kubectl-6849 namespace.\n" Apr 3 23:41:41.099: INFO: stdout: "" Apr 3 23:41:41.099: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-6849 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Apr 3 23:41:41.190: INFO: stderr: "" Apr 3 23:41:41.190: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 3 23:41:41.190: INFO: Waiting up to 3m0s for all (but 0) nodes to 
be ready STEP: Destroying namespace "kubectl-6849" for this suite. • [SLOW TEST:6.471 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:269 should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance]","total":275,"completed":17,"skipped":232,"failed":0} SSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a read only busybox container should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 3 23:41:41.251: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:38 [It] should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 3 23:41:45.405: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-5353" for this suite. 
•{"msg":"PASSED [k8s.io] Kubelet when scheduling a read only busybox container should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":18,"skipped":242,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 3 23:41:45.414: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-7434.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-7434.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-7434.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-7434.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-7434.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-7434.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-7434.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-7434.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-7434.svc.cluster.local SRV)" && test -n "$$check" && echo OK > 
/results/wheezy_udp@_http._tcp.test-service-2.dns-7434.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-7434.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-7434.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-7434.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 235.133.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.133.235_udp@PTR;check="$$(dig +tcp +noall +answer +search 235.133.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.133.235_tcp@PTR;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-7434.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-7434.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-7434.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-7434.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-7434.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-7434.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-7434.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-7434.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-7434.svc.cluster.local SRV)" && test -n "$$check" && echo OK > 
/results/jessie_udp@_http._tcp.test-service-2.dns-7434.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-7434.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-7434.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-7434.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 235.133.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.133.235_udp@PTR;check="$$(dig +tcp +noall +answer +search 235.133.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.133.235_tcp@PTR;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Apr 3 23:41:51.607: INFO: Unable to read wheezy_udp@dns-test-service.dns-7434.svc.cluster.local from pod dns-7434/dns-test-c894441b-c664-479d-a4ad-5701eef011b0: the server could not find the requested resource (get pods dns-test-c894441b-c664-479d-a4ad-5701eef011b0) Apr 3 23:41:51.611: INFO: Unable to read wheezy_tcp@dns-test-service.dns-7434.svc.cluster.local from pod dns-7434/dns-test-c894441b-c664-479d-a4ad-5701eef011b0: the server could not find the requested resource (get pods dns-test-c894441b-c664-479d-a4ad-5701eef011b0) Apr 3 23:41:51.613: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-7434.svc.cluster.local from pod dns-7434/dns-test-c894441b-c664-479d-a4ad-5701eef011b0: the server could not find the requested resource (get pods dns-test-c894441b-c664-479d-a4ad-5701eef011b0) Apr 3 23:41:51.616: INFO: Unable to read 
wheezy_tcp@_http._tcp.dns-test-service.dns-7434.svc.cluster.local from pod dns-7434/dns-test-c894441b-c664-479d-a4ad-5701eef011b0: the server could not find the requested resource (get pods dns-test-c894441b-c664-479d-a4ad-5701eef011b0) Apr 3 23:41:51.639: INFO: Unable to read jessie_udp@dns-test-service.dns-7434.svc.cluster.local from pod dns-7434/dns-test-c894441b-c664-479d-a4ad-5701eef011b0: the server could not find the requested resource (get pods dns-test-c894441b-c664-479d-a4ad-5701eef011b0) Apr 3 23:41:51.642: INFO: Unable to read jessie_tcp@dns-test-service.dns-7434.svc.cluster.local from pod dns-7434/dns-test-c894441b-c664-479d-a4ad-5701eef011b0: the server could not find the requested resource (get pods dns-test-c894441b-c664-479d-a4ad-5701eef011b0) Apr 3 23:41:51.645: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-7434.svc.cluster.local from pod dns-7434/dns-test-c894441b-c664-479d-a4ad-5701eef011b0: the server could not find the requested resource (get pods dns-test-c894441b-c664-479d-a4ad-5701eef011b0) Apr 3 23:41:51.649: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-7434.svc.cluster.local from pod dns-7434/dns-test-c894441b-c664-479d-a4ad-5701eef011b0: the server could not find the requested resource (get pods dns-test-c894441b-c664-479d-a4ad-5701eef011b0) Apr 3 23:41:51.669: INFO: Lookups using dns-7434/dns-test-c894441b-c664-479d-a4ad-5701eef011b0 failed for: [wheezy_udp@dns-test-service.dns-7434.svc.cluster.local wheezy_tcp@dns-test-service.dns-7434.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-7434.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-7434.svc.cluster.local jessie_udp@dns-test-service.dns-7434.svc.cluster.local jessie_tcp@dns-test-service.dns-7434.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-7434.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-7434.svc.cluster.local] Apr 3 23:41:56.674: INFO: Unable to read 
wheezy_udp@dns-test-service.dns-7434.svc.cluster.local from pod dns-7434/dns-test-c894441b-c664-479d-a4ad-5701eef011b0: the server could not find the requested resource (get pods dns-test-c894441b-c664-479d-a4ad-5701eef011b0) Apr 3 23:41:56.677: INFO: Unable to read wheezy_tcp@dns-test-service.dns-7434.svc.cluster.local from pod dns-7434/dns-test-c894441b-c664-479d-a4ad-5701eef011b0: the server could not find the requested resource (get pods dns-test-c894441b-c664-479d-a4ad-5701eef011b0) Apr 3 23:41:56.681: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-7434.svc.cluster.local from pod dns-7434/dns-test-c894441b-c664-479d-a4ad-5701eef011b0: the server could not find the requested resource (get pods dns-test-c894441b-c664-479d-a4ad-5701eef011b0) Apr 3 23:41:56.683: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-7434.svc.cluster.local from pod dns-7434/dns-test-c894441b-c664-479d-a4ad-5701eef011b0: the server could not find the requested resource (get pods dns-test-c894441b-c664-479d-a4ad-5701eef011b0) Apr 3 23:41:56.706: INFO: Unable to read jessie_udp@dns-test-service.dns-7434.svc.cluster.local from pod dns-7434/dns-test-c894441b-c664-479d-a4ad-5701eef011b0: the server could not find the requested resource (get pods dns-test-c894441b-c664-479d-a4ad-5701eef011b0) Apr 3 23:41:56.709: INFO: Unable to read jessie_tcp@dns-test-service.dns-7434.svc.cluster.local from pod dns-7434/dns-test-c894441b-c664-479d-a4ad-5701eef011b0: the server could not find the requested resource (get pods dns-test-c894441b-c664-479d-a4ad-5701eef011b0) Apr 3 23:41:56.712: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-7434.svc.cluster.local from pod dns-7434/dns-test-c894441b-c664-479d-a4ad-5701eef011b0: the server could not find the requested resource (get pods dns-test-c894441b-c664-479d-a4ad-5701eef011b0) Apr 3 23:41:56.714: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-7434.svc.cluster.local from pod 
dns-7434/dns-test-c894441b-c664-479d-a4ad-5701eef011b0: the server could not find the requested resource (get pods dns-test-c894441b-c664-479d-a4ad-5701eef011b0) Apr 3 23:41:56.733: INFO: Lookups using dns-7434/dns-test-c894441b-c664-479d-a4ad-5701eef011b0 failed for: [wheezy_udp@dns-test-service.dns-7434.svc.cluster.local wheezy_tcp@dns-test-service.dns-7434.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-7434.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-7434.svc.cluster.local jessie_udp@dns-test-service.dns-7434.svc.cluster.local jessie_tcp@dns-test-service.dns-7434.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-7434.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-7434.svc.cluster.local] Apr 3 23:42:01.674: INFO: Unable to read wheezy_udp@dns-test-service.dns-7434.svc.cluster.local from pod dns-7434/dns-test-c894441b-c664-479d-a4ad-5701eef011b0: the server could not find the requested resource (get pods dns-test-c894441b-c664-479d-a4ad-5701eef011b0) Apr 3 23:42:01.678: INFO: Unable to read wheezy_tcp@dns-test-service.dns-7434.svc.cluster.local from pod dns-7434/dns-test-c894441b-c664-479d-a4ad-5701eef011b0: the server could not find the requested resource (get pods dns-test-c894441b-c664-479d-a4ad-5701eef011b0) Apr 3 23:42:01.682: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-7434.svc.cluster.local from pod dns-7434/dns-test-c894441b-c664-479d-a4ad-5701eef011b0: the server could not find the requested resource (get pods dns-test-c894441b-c664-479d-a4ad-5701eef011b0) Apr 3 23:42:01.685: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-7434.svc.cluster.local from pod dns-7434/dns-test-c894441b-c664-479d-a4ad-5701eef011b0: the server could not find the requested resource (get pods dns-test-c894441b-c664-479d-a4ad-5701eef011b0) Apr 3 23:42:01.708: INFO: Unable to read jessie_udp@dns-test-service.dns-7434.svc.cluster.local from pod 
dns-7434/dns-test-c894441b-c664-479d-a4ad-5701eef011b0: the server could not find the requested resource (get pods dns-test-c894441b-c664-479d-a4ad-5701eef011b0) Apr 3 23:42:01.711: INFO: Unable to read jessie_tcp@dns-test-service.dns-7434.svc.cluster.local from pod dns-7434/dns-test-c894441b-c664-479d-a4ad-5701eef011b0: the server could not find the requested resource (get pods dns-test-c894441b-c664-479d-a4ad-5701eef011b0) Apr 3 23:42:01.713: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-7434.svc.cluster.local from pod dns-7434/dns-test-c894441b-c664-479d-a4ad-5701eef011b0: the server could not find the requested resource (get pods dns-test-c894441b-c664-479d-a4ad-5701eef011b0) Apr 3 23:42:01.716: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-7434.svc.cluster.local from pod dns-7434/dns-test-c894441b-c664-479d-a4ad-5701eef011b0: the server could not find the requested resource (get pods dns-test-c894441b-c664-479d-a4ad-5701eef011b0) Apr 3 23:42:01.733: INFO: Lookups using dns-7434/dns-test-c894441b-c664-479d-a4ad-5701eef011b0 failed for: [wheezy_udp@dns-test-service.dns-7434.svc.cluster.local wheezy_tcp@dns-test-service.dns-7434.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-7434.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-7434.svc.cluster.local jessie_udp@dns-test-service.dns-7434.svc.cluster.local jessie_tcp@dns-test-service.dns-7434.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-7434.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-7434.svc.cluster.local] Apr 3 23:42:06.674: INFO: Unable to read wheezy_udp@dns-test-service.dns-7434.svc.cluster.local from pod dns-7434/dns-test-c894441b-c664-479d-a4ad-5701eef011b0: the server could not find the requested resource (get pods dns-test-c894441b-c664-479d-a4ad-5701eef011b0) Apr 3 23:42:06.677: INFO: Unable to read wheezy_tcp@dns-test-service.dns-7434.svc.cluster.local from pod 
dns-7434/dns-test-c894441b-c664-479d-a4ad-5701eef011b0: the server could not find the requested resource (get pods dns-test-c894441b-c664-479d-a4ad-5701eef011b0) Apr 3 23:42:06.681: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-7434.svc.cluster.local from pod dns-7434/dns-test-c894441b-c664-479d-a4ad-5701eef011b0: the server could not find the requested resource (get pods dns-test-c894441b-c664-479d-a4ad-5701eef011b0) Apr 3 23:42:06.685: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-7434.svc.cluster.local from pod dns-7434/dns-test-c894441b-c664-479d-a4ad-5701eef011b0: the server could not find the requested resource (get pods dns-test-c894441b-c664-479d-a4ad-5701eef011b0) Apr 3 23:42:06.710: INFO: Unable to read jessie_udp@dns-test-service.dns-7434.svc.cluster.local from pod dns-7434/dns-test-c894441b-c664-479d-a4ad-5701eef011b0: the server could not find the requested resource (get pods dns-test-c894441b-c664-479d-a4ad-5701eef011b0) Apr 3 23:42:06.713: INFO: Unable to read jessie_tcp@dns-test-service.dns-7434.svc.cluster.local from pod dns-7434/dns-test-c894441b-c664-479d-a4ad-5701eef011b0: the server could not find the requested resource (get pods dns-test-c894441b-c664-479d-a4ad-5701eef011b0) Apr 3 23:42:06.716: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-7434.svc.cluster.local from pod dns-7434/dns-test-c894441b-c664-479d-a4ad-5701eef011b0: the server could not find the requested resource (get pods dns-test-c894441b-c664-479d-a4ad-5701eef011b0) Apr 3 23:42:06.718: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-7434.svc.cluster.local from pod dns-7434/dns-test-c894441b-c664-479d-a4ad-5701eef011b0: the server could not find the requested resource (get pods dns-test-c894441b-c664-479d-a4ad-5701eef011b0) Apr 3 23:42:06.736: INFO: Lookups using dns-7434/dns-test-c894441b-c664-479d-a4ad-5701eef011b0 failed for: [wheezy_udp@dns-test-service.dns-7434.svc.cluster.local 
wheezy_tcp@dns-test-service.dns-7434.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-7434.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-7434.svc.cluster.local jessie_udp@dns-test-service.dns-7434.svc.cluster.local jessie_tcp@dns-test-service.dns-7434.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-7434.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-7434.svc.cluster.local] Apr 3 23:42:11.674: INFO: Unable to read wheezy_udp@dns-test-service.dns-7434.svc.cluster.local from pod dns-7434/dns-test-c894441b-c664-479d-a4ad-5701eef011b0: the server could not find the requested resource (get pods dns-test-c894441b-c664-479d-a4ad-5701eef011b0) Apr 3 23:42:11.677: INFO: Unable to read wheezy_tcp@dns-test-service.dns-7434.svc.cluster.local from pod dns-7434/dns-test-c894441b-c664-479d-a4ad-5701eef011b0: the server could not find the requested resource (get pods dns-test-c894441b-c664-479d-a4ad-5701eef011b0) Apr 3 23:42:11.680: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-7434.svc.cluster.local from pod dns-7434/dns-test-c894441b-c664-479d-a4ad-5701eef011b0: the server could not find the requested resource (get pods dns-test-c894441b-c664-479d-a4ad-5701eef011b0) Apr 3 23:42:11.683: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-7434.svc.cluster.local from pod dns-7434/dns-test-c894441b-c664-479d-a4ad-5701eef011b0: the server could not find the requested resource (get pods dns-test-c894441b-c664-479d-a4ad-5701eef011b0) Apr 3 23:42:11.720: INFO: Unable to read jessie_udp@dns-test-service.dns-7434.svc.cluster.local from pod dns-7434/dns-test-c894441b-c664-479d-a4ad-5701eef011b0: the server could not find the requested resource (get pods dns-test-c894441b-c664-479d-a4ad-5701eef011b0) Apr 3 23:42:11.722: INFO: Unable to read jessie_tcp@dns-test-service.dns-7434.svc.cluster.local from pod dns-7434/dns-test-c894441b-c664-479d-a4ad-5701eef011b0: the server could not find the requested resource 
(get pods dns-test-c894441b-c664-479d-a4ad-5701eef011b0) Apr 3 23:42:11.729: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-7434.svc.cluster.local from pod dns-7434/dns-test-c894441b-c664-479d-a4ad-5701eef011b0: the server could not find the requested resource (get pods dns-test-c894441b-c664-479d-a4ad-5701eef011b0) Apr 3 23:42:11.731: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-7434.svc.cluster.local from pod dns-7434/dns-test-c894441b-c664-479d-a4ad-5701eef011b0: the server could not find the requested resource (get pods dns-test-c894441b-c664-479d-a4ad-5701eef011b0) Apr 3 23:42:11.766: INFO: Lookups using dns-7434/dns-test-c894441b-c664-479d-a4ad-5701eef011b0 failed for: [wheezy_udp@dns-test-service.dns-7434.svc.cluster.local wheezy_tcp@dns-test-service.dns-7434.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-7434.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-7434.svc.cluster.local jessie_udp@dns-test-service.dns-7434.svc.cluster.local jessie_tcp@dns-test-service.dns-7434.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-7434.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-7434.svc.cluster.local] Apr 3 23:42:16.682: INFO: Unable to read wheezy_udp@dns-test-service.dns-7434.svc.cluster.local from pod dns-7434/dns-test-c894441b-c664-479d-a4ad-5701eef011b0: the server could not find the requested resource (get pods dns-test-c894441b-c664-479d-a4ad-5701eef011b0) Apr 3 23:42:16.685: INFO: Unable to read wheezy_tcp@dns-test-service.dns-7434.svc.cluster.local from pod dns-7434/dns-test-c894441b-c664-479d-a4ad-5701eef011b0: the server could not find the requested resource (get pods dns-test-c894441b-c664-479d-a4ad-5701eef011b0) Apr 3 23:42:16.688: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-7434.svc.cluster.local from pod dns-7434/dns-test-c894441b-c664-479d-a4ad-5701eef011b0: the server could not find the requested resource (get pods 
dns-test-c894441b-c664-479d-a4ad-5701eef011b0) Apr 3 23:42:16.691: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-7434.svc.cluster.local from pod dns-7434/dns-test-c894441b-c664-479d-a4ad-5701eef011b0: the server could not find the requested resource (get pods dns-test-c894441b-c664-479d-a4ad-5701eef011b0) Apr 3 23:42:16.734: INFO: Unable to read jessie_udp@dns-test-service.dns-7434.svc.cluster.local from pod dns-7434/dns-test-c894441b-c664-479d-a4ad-5701eef011b0: the server could not find the requested resource (get pods dns-test-c894441b-c664-479d-a4ad-5701eef011b0) Apr 3 23:42:16.737: INFO: Unable to read jessie_tcp@dns-test-service.dns-7434.svc.cluster.local from pod dns-7434/dns-test-c894441b-c664-479d-a4ad-5701eef011b0: the server could not find the requested resource (get pods dns-test-c894441b-c664-479d-a4ad-5701eef011b0) Apr 3 23:42:16.740: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-7434.svc.cluster.local from pod dns-7434/dns-test-c894441b-c664-479d-a4ad-5701eef011b0: the server could not find the requested resource (get pods dns-test-c894441b-c664-479d-a4ad-5701eef011b0) Apr 3 23:42:16.742: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-7434.svc.cluster.local from pod dns-7434/dns-test-c894441b-c664-479d-a4ad-5701eef011b0: the server could not find the requested resource (get pods dns-test-c894441b-c664-479d-a4ad-5701eef011b0) Apr 3 23:42:16.762: INFO: Lookups using dns-7434/dns-test-c894441b-c664-479d-a4ad-5701eef011b0 failed for: [wheezy_udp@dns-test-service.dns-7434.svc.cluster.local wheezy_tcp@dns-test-service.dns-7434.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-7434.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-7434.svc.cluster.local jessie_udp@dns-test-service.dns-7434.svc.cluster.local jessie_tcp@dns-test-service.dns-7434.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-7434.svc.cluster.local 
jessie_tcp@_http._tcp.dns-test-service.dns-7434.svc.cluster.local] Apr 3 23:42:21.746: INFO: DNS probes using dns-7434/dns-test-c894441b-c664-479d-a4ad-5701eef011b0 succeeded STEP: deleting the pod STEP: deleting the test service STEP: deleting the test headless service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 3 23:42:22.156: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-7434" for this suite. • [SLOW TEST:36.753 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for services [Conformance]","total":275,"completed":19,"skipped":278,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] ConfigMap should fail to create ConfigMap with empty key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 3 23:42:22.168: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should fail to create ConfigMap with empty key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating configMap that has name configmap-test-emptyKey-186e025d-accd-44d1-8056-7ad885b0b641 [AfterEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 3 23:42:22.263: 
INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-7169" for this suite. •{"msg":"PASSED [sig-node] ConfigMap should fail to create ConfigMap with empty key [Conformance]","total":275,"completed":20,"skipped":306,"failed":0} SSSSSS ------------------------------ [sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 3 23:42:22.271: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replicaset STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Given a Pod with a 'name' label pod-adoption-release is created STEP: When a replicaset with a matching selector is created STEP: Then the orphan pod is adopted STEP: When the matched label of one of its pods change Apr 3 23:42:27.391: INFO: Pod name pod-adoption-release: Found 1 pods out of 1 STEP: Then the pod is released [AfterEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 3 23:42:27.423: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replicaset-4684" for this suite. 
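The DNS probe scripts earlier build each pod's A-record name by swapping the dots in the pod IP for dashes (`hostname -i | awk -F. ...`). That transform can be checked offline with the same awk program; the IP below is an example value, not one from this log:

```shell
# Derive a dns-7434 pod A-record name from a pod IP, as the probe script does.
ip="10.244.1.7"   # example pod IP (assumption; not taken from this run)
podARec=$(printf '%s\n' "$ip" | awk -F. '{print $1"-"$2"-"$3"-"$4".dns-7434.pod.cluster.local"}')
printf '%s\n' "$podARec"   # 10-244-1-7.dns-7434.pod.cluster.local
```

The probe then resolves this name over both UDP and TCP (`dig +notcp` / `dig +tcp`) and writes an `OK` marker file only when the answer section is non-empty, which is what the `test -n "$check"` guards do.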
• [SLOW TEST:5.237 seconds] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance]","total":275,"completed":21,"skipped":312,"failed":0} SSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 3 23:42:27.508: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating secret with name secret-test-9e29c486-dad7-4000-ae8c-9af1b5d4c46b STEP: Creating a pod to test consume secrets Apr 3 23:42:27.581: INFO: Waiting up to 5m0s for pod "pod-secrets-40110f9e-a4df-404b-bfbb-62cd6b92d5db" in namespace "secrets-9572" to be "Succeeded or Failed" Apr 3 23:42:27.627: INFO: Pod "pod-secrets-40110f9e-a4df-404b-bfbb-62cd6b92d5db": Phase="Pending", Reason="", readiness=false. Elapsed: 46.243249ms Apr 3 23:42:29.631: INFO: Pod "pod-secrets-40110f9e-a4df-404b-bfbb-62cd6b92d5db": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.05036098s Apr 3 23:42:31.635: INFO: Pod "pod-secrets-40110f9e-a4df-404b-bfbb-62cd6b92d5db": Phase="Running", Reason="", readiness=true. Elapsed: 4.054489007s Apr 3 23:42:33.639: INFO: Pod "pod-secrets-40110f9e-a4df-404b-bfbb-62cd6b92d5db": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.058586653s STEP: Saw pod success Apr 3 23:42:33.640: INFO: Pod "pod-secrets-40110f9e-a4df-404b-bfbb-62cd6b92d5db" satisfied condition "Succeeded or Failed" Apr 3 23:42:33.642: INFO: Trying to get logs from node latest-worker pod pod-secrets-40110f9e-a4df-404b-bfbb-62cd6b92d5db container secret-volume-test: STEP: delete the pod Apr 3 23:42:33.671: INFO: Waiting for pod pod-secrets-40110f9e-a4df-404b-bfbb-62cd6b92d5db to disappear Apr 3 23:42:33.688: INFO: Pod pod-secrets-40110f9e-a4df-404b-bfbb-62cd6b92d5db no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 3 23:42:33.688: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-9572" for this suite. 
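The secret-volume test that just passed mounts a Secret with an explicit `defaultMode` and has the test container read the mount back. A sketch of that shape — all names hypothetical, printed rather than applied; `defaultMode` sets the file mode of the projected keys:

```shell
# Emit a hypothetical pod manifest mounting a Secret volume with defaultMode 0400.
manifest=$(cat <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets-demo             # hypothetical name
spec:
  containers:
  - name: secret-volume-test
    image: busybox
    command: ["sh", "-c", "ls -l /etc/secret-volume"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
  volumes:
  - name: secret-volume
    secret:
      secretName: secret-test-demo   # hypothetical name
      defaultMode: 0400
EOF
)
printf '%s\n' "$manifest"
```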
• [SLOW TEST:6.187 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:36 should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":22,"skipped":319,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 3 23:42:33.695: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating secret with name secret-test-0dea80d5-bcab-47bf-b162-47f912ff66b3 STEP: Creating a pod to test consume secrets Apr 3 23:42:33.830: INFO: Waiting up to 5m0s for pod "pod-secrets-387ef57f-3b8f-4de2-ad9c-c27a2c6fa956" in namespace "secrets-5943" to be "Succeeded or Failed" Apr 3 23:42:33.874: INFO: Pod "pod-secrets-387ef57f-3b8f-4de2-ad9c-c27a2c6fa956": Phase="Pending", Reason="", readiness=false. 
Elapsed: 43.856624ms Apr 3 23:42:35.877: INFO: Pod "pod-secrets-387ef57f-3b8f-4de2-ad9c-c27a2c6fa956": Phase="Pending", Reason="", readiness=false. Elapsed: 2.046406937s Apr 3 23:42:37.880: INFO: Pod "pod-secrets-387ef57f-3b8f-4de2-ad9c-c27a2c6fa956": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.049516041s STEP: Saw pod success Apr 3 23:42:37.880: INFO: Pod "pod-secrets-387ef57f-3b8f-4de2-ad9c-c27a2c6fa956" satisfied condition "Succeeded or Failed" Apr 3 23:42:37.883: INFO: Trying to get logs from node latest-worker2 pod pod-secrets-387ef57f-3b8f-4de2-ad9c-c27a2c6fa956 container secret-volume-test: STEP: delete the pod Apr 3 23:42:37.916: INFO: Waiting for pod pod-secrets-387ef57f-3b8f-4de2-ad9c-c27a2c6fa956 to disappear Apr 3 23:42:37.935: INFO: Pod pod-secrets-387ef57f-3b8f-4de2-ad9c-c27a2c6fa956 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 3 23:42:37.936: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-5943" for this suite. 
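Both secret tests wait the same way: poll the pod phase, logging the elapsed time, until the pod satisfies the condition "Succeeded or Failed" or the 5m deadline expires. A cluster-free sketch of that loop, with a stub standing in for the API-server query:

```shell
# Stub: the phase flips to Succeeded on the third poll.
# A real loop would query the API server for the pod's status.phase instead.
phase() { [ "$1" -ge 3 ] && echo "Succeeded" || echo "Pending"; }

polls=0
until p=$(phase "$polls"); [ "$p" = "Succeeded" ] || [ "$p" = "Failed" ]; do
  polls=$((polls + 1))
done
echo "condition \"Succeeded or Failed\" met after $polls polls"
```

The real framework adds what the stub omits: a sleep between polls and an overall timeout, after which the wait fails rather than spinning forever.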
•{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":23,"skipped":349,"failed":0} SSS ------------------------------ [sig-cli] Kubectl client Proxy server should support --unix-socket=/path [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 3 23:42:37.944: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219 [It] should support --unix-socket=/path [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Starting the proxy Apr 3 23:42:37.976: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config proxy --unix-socket=/tmp/kubectl-proxy-unix054881508/test' STEP: retrieving proxy /api/ output [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 3 23:42:38.054: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-8324" for this suite. 
•{"msg":"PASSED [sig-cli] Kubectl client Proxy server should support --unix-socket=/path [Conformance]","total":275,"completed":24,"skipped":352,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 3 23:42:38.063: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64 STEP: create the container to handle the HTTPGet hook request. 
[It] should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: create the pod with lifecycle hook STEP: check poststart hook STEP: delete the pod with lifecycle hook Apr 3 23:42:46.175: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Apr 3 23:42:46.187: INFO: Pod pod-with-poststart-http-hook still exists Apr 3 23:42:48.187: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Apr 3 23:42:48.191: INFO: Pod pod-with-poststart-http-hook still exists Apr 3 23:42:50.187: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Apr 3 23:42:50.191: INFO: Pod pod-with-poststart-http-hook no longer exists [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 3 23:42:50.191: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-4620" for this suite. 
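The `pod-with-poststart-http-hook` pod named in the log carries a `postStart` lifecycle hook of the `httpGet` variety, pointed at a separately created handler pod (the "container to handle the HTTPGet hook request" from the BeforeEach step). A sketch of such a pod, with the hook target address and path as placeholder assumptions:

```yaml
# Sketch of a pod with a postStart httpGet lifecycle hook.
# The host IP and path are hypothetical stand-ins for the
# handler pod the test creates beforehand.
apiVersion: v1
kind: Pod
metadata:
  name: pod-with-poststart-http-hook
spec:
  containers:
  - name: pod-with-poststart-http-hook
    image: k8s.gcr.io/pause:3.2
    lifecycle:
      postStart:
        httpGet:
          path: /echo?msg=poststart   # hypothetical handler endpoint
          port: 8080
          host: 10.244.0.10           # hypothetical handler-pod IP
```

The kubelet issues the GET after the container starts; the test then verifies on the handler side that the request arrived before deleting the pod, which matches the "check poststart hook" and "delete the pod with lifecycle hook" steps logged above.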
• [SLOW TEST:12.137 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance]","total":275,"completed":25,"skipped":367,"failed":0} SSSSSSS ------------------------------ [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 3 23:42:50.200: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating a watch on configmaps with a certain label STEP: creating a new configmap STEP: modifying the configmap once STEP: changing the label value of the configmap STEP: Expecting to observe a delete notification for the watched object Apr 3 23:42:50.308: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-1579 /api/v1/namespaces/watch-1579/configmaps/e2e-watch-test-label-changed 
4dc88861-69a9-4ba3-a5c9-964152687be9 5189780 0 2020-04-03 23:42:50 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} Apr 3 23:42:50.308: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-1579 /api/v1/namespaces/watch-1579/configmaps/e2e-watch-test-label-changed 4dc88861-69a9-4ba3-a5c9-964152687be9 5189781 0 2020-04-03 23:42:50 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} Apr 3 23:42:50.308: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-1579 /api/v1/namespaces/watch-1579/configmaps/e2e-watch-test-label-changed 4dc88861-69a9-4ba3-a5c9-964152687be9 5189782 0 2020-04-03 23:42:50 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} STEP: modifying the configmap a second time STEP: Expecting not to observe a notification because the object no longer meets the selector's requirements STEP: changing the label value of the configmap back STEP: modifying the configmap a third time STEP: deleting the configmap STEP: Expecting to observe an add notification for the watched object when the label value was restored Apr 3 23:43:00.355: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-1579 /api/v1/namespaces/watch-1579/configmaps/e2e-watch-test-label-changed 4dc88861-69a9-4ba3-a5c9-964152687be9 5189830 0 2020-04-03 23:42:50 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} Apr 3 23:43:00.355: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-1579 /api/v1/namespaces/watch-1579/configmaps/e2e-watch-test-label-changed 
4dc88861-69a9-4ba3-a5c9-964152687be9 5189831 0 2020-04-03 23:42:50 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},Immutable:nil,} Apr 3 23:43:00.355: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-1579 /api/v1/namespaces/watch-1579/configmaps/e2e-watch-test-label-changed 4dc88861-69a9-4ba3-a5c9-964152687be9 5189832 0 2020-04-03 23:42:50 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},Immutable:nil,} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 3 23:43:00.355: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-1579" for this suite. • [SLOW TEST:10.175 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance]","total":275,"completed":26,"skipped":374,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 3 23:43:00.376: INFO: >>> kubeConfig: /root/.kube/config STEP: 
Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:698 [It] should be able to change the type from ExternalName to NodePort [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating a service externalname-service with the type=ExternalName in namespace services-2640 STEP: changing the ExternalName service to type=NodePort STEP: creating replication controller externalname-service in namespace services-2640 I0403 23:43:00.511465 7 runners.go:190] Created replication controller with name: externalname-service, namespace: services-2640, replica count: 2 I0403 23:43:03.561954 7 runners.go:190] externalname-service Pods: 2 out of 2 created, 1 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0403 23:43:06.562248 7 runners.go:190] externalname-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Apr 3 23:43:06.562: INFO: Creating new exec pod Apr 3 23:43:11.584: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=services-2640 execpod2hhzb -- /bin/sh -x -c nc -zv -t -w 2 externalname-service 80' Apr 3 23:43:11.845: INFO: stderr: "I0403 23:43:11.731579 407 log.go:172] (0xc00094a000) (0xc00091a000) Create stream\nI0403 23:43:11.731649 407 log.go:172] (0xc00094a000) (0xc00091a000) Stream added, broadcasting: 1\nI0403 23:43:11.736138 407 log.go:172] (0xc00094a000) Reply frame received for 1\nI0403 23:43:11.736187 407 log.go:172] (0xc00094a000) (0xc00069d220) Create stream\nI0403 23:43:11.736205 407 log.go:172] (0xc00094a000) (0xc00069d220) Stream added, broadcasting: 3\nI0403 23:43:11.737604 407 log.go:172] (0xc00094a000) Reply 
frame received for 3\nI0403 23:43:11.737656 407 log.go:172] (0xc00094a000) (0xc0008e4000) Create stream\nI0403 23:43:11.737679 407 log.go:172] (0xc00094a000) (0xc0008e4000) Stream added, broadcasting: 5\nI0403 23:43:11.738671 407 log.go:172] (0xc00094a000) Reply frame received for 5\nI0403 23:43:11.837000 407 log.go:172] (0xc00094a000) Data frame received for 3\nI0403 23:43:11.837031 407 log.go:172] (0xc00069d220) (3) Data frame handling\nI0403 23:43:11.837101 407 log.go:172] (0xc00094a000) Data frame received for 5\nI0403 23:43:11.837286 407 log.go:172] (0xc0008e4000) (5) Data frame handling\nI0403 23:43:11.837319 407 log.go:172] (0xc0008e4000) (5) Data frame sent\n+ nc -zv -t -w 2 externalname-service 80\nConnection to externalname-service 80 port [tcp/http] succeeded!\nI0403 23:43:11.837431 407 log.go:172] (0xc00094a000) Data frame received for 5\nI0403 23:43:11.837466 407 log.go:172] (0xc0008e4000) (5) Data frame handling\nI0403 23:43:11.839599 407 log.go:172] (0xc00094a000) Data frame received for 1\nI0403 23:43:11.839636 407 log.go:172] (0xc00091a000) (1) Data frame handling\nI0403 23:43:11.839687 407 log.go:172] (0xc00091a000) (1) Data frame sent\nI0403 23:43:11.839708 407 log.go:172] (0xc00094a000) (0xc00091a000) Stream removed, broadcasting: 1\nI0403 23:43:11.839734 407 log.go:172] (0xc00094a000) Go away received\nI0403 23:43:11.840261 407 log.go:172] (0xc00094a000) (0xc00091a000) Stream removed, broadcasting: 1\nI0403 23:43:11.840289 407 log.go:172] (0xc00094a000) (0xc00069d220) Stream removed, broadcasting: 3\nI0403 23:43:11.840302 407 log.go:172] (0xc00094a000) (0xc0008e4000) Stream removed, broadcasting: 5\n" Apr 3 23:43:11.845: INFO: stdout: "" Apr 3 23:43:11.846: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=services-2640 execpod2hhzb -- /bin/sh -x -c nc -zv -t -w 2 10.96.216.210 80' Apr 3 23:43:12.044: INFO: stderr: "I0403 23:43:11.969540 430 log.go:172] (0xc0000eaf20) 
(0xc000784140) Create stream\nI0403 23:43:11.969593 430 log.go:172] (0xc0000eaf20) (0xc000784140) Stream added, broadcasting: 1\nI0403 23:43:11.971820 430 log.go:172] (0xc0000eaf20) Reply frame received for 1\nI0403 23:43:11.971856 430 log.go:172] (0xc0000eaf20) (0xc0006dd360) Create stream\nI0403 23:43:11.971868 430 log.go:172] (0xc0000eaf20) (0xc0006dd360) Stream added, broadcasting: 3\nI0403 23:43:11.972535 430 log.go:172] (0xc0000eaf20) Reply frame received for 3\nI0403 23:43:11.972560 430 log.go:172] (0xc0000eaf20) (0xc0006dd540) Create stream\nI0403 23:43:11.972576 430 log.go:172] (0xc0000eaf20) (0xc0006dd540) Stream added, broadcasting: 5\nI0403 23:43:11.973479 430 log.go:172] (0xc0000eaf20) Reply frame received for 5\nI0403 23:43:12.039149 430 log.go:172] (0xc0000eaf20) Data frame received for 3\nI0403 23:43:12.039209 430 log.go:172] (0xc0006dd360) (3) Data frame handling\nI0403 23:43:12.039249 430 log.go:172] (0xc0000eaf20) Data frame received for 5\nI0403 23:43:12.039276 430 log.go:172] (0xc0006dd540) (5) Data frame handling\nI0403 23:43:12.039306 430 log.go:172] (0xc0006dd540) (5) Data frame sent\nI0403 23:43:12.039328 430 log.go:172] (0xc0000eaf20) Data frame received for 5\nI0403 23:43:12.039351 430 log.go:172] (0xc0006dd540) (5) Data frame handling\n+ nc -zv -t -w 2 10.96.216.210 80\nConnection to 10.96.216.210 80 port [tcp/http] succeeded!\nI0403 23:43:12.040599 430 log.go:172] (0xc0000eaf20) Data frame received for 1\nI0403 23:43:12.040624 430 log.go:172] (0xc000784140) (1) Data frame handling\nI0403 23:43:12.040651 430 log.go:172] (0xc000784140) (1) Data frame sent\nI0403 23:43:12.040669 430 log.go:172] (0xc0000eaf20) (0xc000784140) Stream removed, broadcasting: 1\nI0403 23:43:12.040982 430 log.go:172] (0xc0000eaf20) (0xc000784140) Stream removed, broadcasting: 1\nI0403 23:43:12.041006 430 log.go:172] (0xc0000eaf20) (0xc0006dd360) Stream removed, broadcasting: 3\nI0403 23:43:12.041021 430 log.go:172] (0xc0000eaf20) (0xc0006dd540) Stream removed, 
broadcasting: 5\nI0403 23:43:12.041052 430 log.go:172] (0xc0000eaf20) Go away received\n" Apr 3 23:43:12.044: INFO: stdout: "" Apr 3 23:43:12.044: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=services-2640 execpod2hhzb -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.13 31241' Apr 3 23:43:12.261: INFO: stderr: "I0403 23:43:12.184238 451 log.go:172] (0xc0006b8790) (0xc00068c1e0) Create stream\nI0403 23:43:12.184300 451 log.go:172] (0xc0006b8790) (0xc00068c1e0) Stream added, broadcasting: 1\nI0403 23:43:12.187923 451 log.go:172] (0xc0006b8790) Reply frame received for 1\nI0403 23:43:12.187990 451 log.go:172] (0xc0006b8790) (0xc0007fd540) Create stream\nI0403 23:43:12.188010 451 log.go:172] (0xc0006b8790) (0xc0007fd540) Stream added, broadcasting: 3\nI0403 23:43:12.189034 451 log.go:172] (0xc0006b8790) Reply frame received for 3\nI0403 23:43:12.189071 451 log.go:172] (0xc0006b8790) (0xc00068c320) Create stream\nI0403 23:43:12.189084 451 log.go:172] (0xc0006b8790) (0xc00068c320) Stream added, broadcasting: 5\nI0403 23:43:12.190154 451 log.go:172] (0xc0006b8790) Reply frame received for 5\nI0403 23:43:12.253770 451 log.go:172] (0xc0006b8790) Data frame received for 5\nI0403 23:43:12.253923 451 log.go:172] (0xc00068c320) (5) Data frame handling\nI0403 23:43:12.254036 451 log.go:172] (0xc00068c320) (5) Data frame sent\n+ nc -zv -t -w 2 172.17.0.13 31241\nConnection to 172.17.0.13 31241 port [tcp/31241] succeeded!\nI0403 23:43:12.254167 451 log.go:172] (0xc0006b8790) Data frame received for 3\nI0403 23:43:12.254276 451 log.go:172] (0xc0007fd540) (3) Data frame handling\nI0403 23:43:12.254568 451 log.go:172] (0xc0006b8790) Data frame received for 5\nI0403 23:43:12.254593 451 log.go:172] (0xc00068c320) (5) Data frame handling\nI0403 23:43:12.256205 451 log.go:172] (0xc0006b8790) Data frame received for 1\nI0403 23:43:12.256230 451 log.go:172] (0xc00068c1e0) (1) Data frame handling\nI0403 23:43:12.256256 
451 log.go:172] (0xc00068c1e0) (1) Data frame sent\nI0403 23:43:12.256274 451 log.go:172] (0xc0006b8790) (0xc00068c1e0) Stream removed, broadcasting: 1\nI0403 23:43:12.256403 451 log.go:172] (0xc0006b8790) Go away received\nI0403 23:43:12.256673 451 log.go:172] (0xc0006b8790) (0xc00068c1e0) Stream removed, broadcasting: 1\nI0403 23:43:12.256694 451 log.go:172] (0xc0006b8790) (0xc0007fd540) Stream removed, broadcasting: 3\nI0403 23:43:12.256703 451 log.go:172] (0xc0006b8790) (0xc00068c320) Stream removed, broadcasting: 5\n" Apr 3 23:43:12.261: INFO: stdout: "" Apr 3 23:43:12.261: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=services-2640 execpod2hhzb -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.12 31241' Apr 3 23:43:12.509: INFO: stderr: "I0403 23:43:12.433310 471 log.go:172] (0xc000a2afd0) (0xc000ad25a0) Create stream\nI0403 23:43:12.433374 471 log.go:172] (0xc000a2afd0) (0xc000ad25a0) Stream added, broadcasting: 1\nI0403 23:43:12.437085 471 log.go:172] (0xc000a2afd0) Reply frame received for 1\nI0403 23:43:12.437273 471 log.go:172] (0xc000a2afd0) (0xc0006b9680) Create stream\nI0403 23:43:12.437292 471 log.go:172] (0xc000a2afd0) (0xc0006b9680) Stream added, broadcasting: 3\nI0403 23:43:12.438172 471 log.go:172] (0xc000a2afd0) Reply frame received for 3\nI0403 23:43:12.438202 471 log.go:172] (0xc000a2afd0) (0xc000538aa0) Create stream\nI0403 23:43:12.438210 471 log.go:172] (0xc000a2afd0) (0xc000538aa0) Stream added, broadcasting: 5\nI0403 23:43:12.438925 471 log.go:172] (0xc000a2afd0) Reply frame received for 5\nI0403 23:43:12.500791 471 log.go:172] (0xc000a2afd0) Data frame received for 3\nI0403 23:43:12.500830 471 log.go:172] (0xc0006b9680) (3) Data frame handling\nI0403 23:43:12.500863 471 log.go:172] (0xc000a2afd0) Data frame received for 5\nI0403 23:43:12.500889 471 log.go:172] (0xc000538aa0) (5) Data frame handling\nI0403 23:43:12.500916 471 log.go:172] (0xc000538aa0) (5) Data frame 
sent\nI0403 23:43:12.500943 471 log.go:172] (0xc000a2afd0) Data frame received for 5\nI0403 23:43:12.500968 471 log.go:172] (0xc000538aa0) (5) Data frame handling\n+ nc -zv -t -w 2 172.17.0.12 31241\nConnection to 172.17.0.12 31241 port [tcp/31241] succeeded!\nI0403 23:43:12.502768 471 log.go:172] (0xc000a2afd0) Data frame received for 1\nI0403 23:43:12.502787 471 log.go:172] (0xc000ad25a0) (1) Data frame handling\nI0403 23:43:12.502817 471 log.go:172] (0xc000ad25a0) (1) Data frame sent\nI0403 23:43:12.502931 471 log.go:172] (0xc000a2afd0) (0xc000ad25a0) Stream removed, broadcasting: 1\nI0403 23:43:12.503119 471 log.go:172] (0xc000a2afd0) Go away received\nI0403 23:43:12.503354 471 log.go:172] (0xc000a2afd0) (0xc000ad25a0) Stream removed, broadcasting: 1\nI0403 23:43:12.503396 471 log.go:172] (0xc000a2afd0) (0xc0006b9680) Stream removed, broadcasting: 3\nI0403 23:43:12.503412 471 log.go:172] (0xc000a2afd0) (0xc000538aa0) Stream removed, broadcasting: 5\n" Apr 3 23:43:12.509: INFO: stdout: "" Apr 3 23:43:12.509: INFO: Cleaning up the ExternalName to NodePort test service [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 3 23:43:12.534: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-2640" for this suite. 
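The Services test above first creates `externalname-service` as `type: ExternalName`, then mutates it to `type: NodePort` backed by a two-replica replication controller, and finally verifies reachability with `nc` against the service name, the ClusterIP (10.96.216.210:80), and each node's IP on the allocated NodePort (31241). The before/after service specs look roughly like this; the `externalName` target and selector labels are illustrative assumptions:

```yaml
# Initial form: a pure DNS alias, no selector, no ports required.
apiVersion: v1
kind: Service
metadata:
  name: externalname-service
spec:
  type: ExternalName
  externalName: example.example.com   # hypothetical target
---
# After the type change: a NodePort service selecting the RC's pods.
# The nodePort (31241 in the log) is normally auto-allocated.
apiVersion: v1
kind: Service
metadata:
  name: externalname-service
spec:
  type: NodePort
  selector:
    name: externalname-service        # assumed RC pod label
  ports:
  - port: 80
    targetPort: 80
```

Changing the type in place, rather than deleting and recreating the service, is the behavior under test: the service keeps its name while gaining a ClusterIP and NodePort.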
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:702 • [SLOW TEST:12.166 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from ExternalName to NodePort [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]","total":275,"completed":27,"skipped":411,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 3 23:43:12.543: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test downward API volume plugin Apr 3 23:43:12.614: INFO: Waiting up to 5m0s for pod "downwardapi-volume-ec6b8094-5b8f-4b76-9ddd-17c25013142f" in namespace "projected-5315" to be "Succeeded or Failed" Apr 3 23:43:12.639: INFO: Pod "downwardapi-volume-ec6b8094-5b8f-4b76-9ddd-17c25013142f": Phase="Pending", Reason="", readiness=false. 
Elapsed: 24.609656ms Apr 3 23:43:14.649: INFO: Pod "downwardapi-volume-ec6b8094-5b8f-4b76-9ddd-17c25013142f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.034933141s Apr 3 23:43:16.654: INFO: Pod "downwardapi-volume-ec6b8094-5b8f-4b76-9ddd-17c25013142f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.039405609s STEP: Saw pod success Apr 3 23:43:16.654: INFO: Pod "downwardapi-volume-ec6b8094-5b8f-4b76-9ddd-17c25013142f" satisfied condition "Succeeded or Failed" Apr 3 23:43:16.657: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-ec6b8094-5b8f-4b76-9ddd-17c25013142f container client-container: STEP: delete the pod Apr 3 23:43:16.679: INFO: Waiting for pod downwardapi-volume-ec6b8094-5b8f-4b76-9ddd-17c25013142f to disappear Apr 3 23:43:16.682: INFO: Pod downwardapi-volume-ec6b8094-5b8f-4b76-9ddd-17c25013142f no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 3 23:43:16.682: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5315" for this suite. 
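The projected downward API test above mounts the container's own CPU request as a file and has the container print it. A minimal sketch of that pod, with image, paths, and the request value as assumptions:

```yaml
# Sketch: expose the container's CPU request through a projected
# downwardAPI volume. Values are illustrative, not the test's spec.
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-example
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    command: ["sh", "-c", "cat /etc/podinfo/cpu_request"]
    resources:
      requests:
        cpu: 250m
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: cpu_request
            resourceFieldRef:
              containerName: client-container
              resource: requests.cpu
```

The test asserts on the container's log output, which is why the framework fetches logs from the `client-container` after the pod reaches `Succeeded`.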
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance]","total":275,"completed":28,"skipped":434,"failed":0} SSS ------------------------------ [k8s.io] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 3 23:43:16.728: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54 [It] should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating pod test-webserver-52e4de5a-8a44-4912-bcc3-a70003fb8148 in namespace container-probe-6497 Apr 3 23:43:20.856: INFO: Started pod test-webserver-52e4de5a-8a44-4912-bcc3-a70003fb8148 in namespace container-probe-6497 STEP: checking the pod's current state and verifying that restartCount is present Apr 3 23:43:20.862: INFO: Initial restart count of pod test-webserver-52e4de5a-8a44-4912-bcc3-a70003fb8148 is 0 STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 3 23:47:21.532: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-6497" for this suite. 
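The four-minute probing test above runs a webserver pod with an HTTP liveness probe against `/healthz` and verifies that `restartCount` stays at 0 for the observation window. A rough equivalent, with the image as an explicitly hypothetical placeholder for any server that answers 200 on that path:

```yaml
# Sketch of a pod whose liveness probe should always succeed,
# so the container is never restarted. The image is a hypothetical
# stand-in for a server returning 200 on /healthz.
apiVersion: v1
kind: Pod
metadata:
  name: test-webserver-example
spec:
  containers:
  - name: test-webserver
    image: example.com/healthy-webserver:latest   # hypothetical
    ports:
    - containerPort: 80
    livenessProbe:
      httpGet:
        path: /healthz
        port: 80
      initialDelaySeconds: 15
      failureThreshold: 1
```

With `failureThreshold: 1`, even a single failed probe would trigger a restart, so the unchanged restart count over the watch period is a meaningful pass condition.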
• [SLOW TEST:244.816 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":275,"completed":29,"skipped":437,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 3 23:47:21.545: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:91 Apr 3 23:47:21.786: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Apr 3 23:47:21.824: INFO: Waiting for terminating namespaces to be deleted... 
Apr 3 23:47:21.837: INFO: Logging pods the kubelet thinks is on node latest-worker before test Apr 3 23:47:21.883: INFO: kube-proxy-s9v6p from kube-system started at 2020-03-15 18:28:07 +0000 UTC (1 container statuses recorded) Apr 3 23:47:21.883: INFO: Container kube-proxy ready: true, restart count 0 Apr 3 23:47:21.883: INFO: kindnet-vnjgh from kube-system started at 2020-03-15 18:28:07 +0000 UTC (1 container statuses recorded) Apr 3 23:47:21.883: INFO: Container kindnet-cni ready: true, restart count 0 Apr 3 23:47:21.883: INFO: Logging pods the kubelet thinks is on node latest-worker2 before test Apr 3 23:47:21.921: INFO: kindnet-zq6gp from kube-system started at 2020-03-15 18:28:07 +0000 UTC (1 container statuses recorded) Apr 3 23:47:21.921: INFO: Container kindnet-cni ready: true, restart count 0 Apr 3 23:47:21.921: INFO: kube-proxy-c5xlk from kube-system started at 2020-03-15 18:28:07 +0000 UTC (1 container statuses recorded) Apr 3 23:47:21.921: INFO: Container kube-proxy ready: true, restart count 0 [It] validates that NodeSelector is respected if not matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Trying to schedule Pod with nonempty NodeSelector. STEP: Considering event: Type = [Warning], Name = [restricted-pod.160274129497a410], Reason = [FailedScheduling], Message = [0/3 nodes are available: 3 node(s) didn't match node selector.] [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 3 23:47:22.942: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-4531" for this suite. 
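The scheduling test above creates `restricted-pod` with a `nodeSelector` that matches no node, then asserts that a `FailedScheduling` event is emitted ("0/3 nodes are available: 3 node(s) didn't match node selector."). A minimal pod of that shape, with the label key/value as assumptions:

```yaml
# Sketch: a pod that can never schedule because no node carries
# this label. Label key and value are hypothetical.
apiVersion: v1
kind: Pod
metadata:
  name: restricted-pod
spec:
  nodeSelector:
    e2e-test-label: value-no-node-has   # matches nothing
  containers:
  - name: restricted-pod
    image: k8s.gcr.io/pause:3.2
```

The pod stays `Pending` indefinitely; the test passes on observing the warning event rather than on any state change of the pod itself.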
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:82 •{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance]","total":275,"completed":30,"skipped":483,"failed":0} SSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should verify ResourceQuota with terminating scopes. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 3 23:47:22.949: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should verify ResourceQuota with terminating scopes. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a ResourceQuota with terminating scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a ResourceQuota with not terminating scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a long running pod STEP: Ensuring resource quota with not terminating scope captures the pod usage STEP: Ensuring resource quota with terminating scope ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage STEP: Creating a terminating pod STEP: Ensuring resource quota with terminating scope captures the pod usage STEP: Ensuring resource quota with not terminating scope ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 3 23:47:39.181: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-4497" for this suite. • [SLOW TEST:16.240 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should verify ResourceQuota with terminating scopes. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with terminating scopes. 
[Conformance]","total":275,"completed":31,"skipped":493,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 3 23:47:39.190: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:91 Apr 3 23:47:39.240: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Apr 3 23:47:39.251: INFO: Waiting for terminating namespaces to be deleted... 
Apr 3 23:47:39.253: INFO: Logging pods the kubelet thinks is on node latest-worker before test Apr 3 23:47:39.257: INFO: kube-proxy-s9v6p from kube-system started at 2020-03-15 18:28:07 +0000 UTC (1 container statuses recorded) Apr 3 23:47:39.257: INFO: Container kube-proxy ready: true, restart count 0 Apr 3 23:47:39.257: INFO: kindnet-vnjgh from kube-system started at 2020-03-15 18:28:07 +0000 UTC (1 container statuses recorded) Apr 3 23:47:39.257: INFO: Container kindnet-cni ready: true, restart count 0 Apr 3 23:47:39.257: INFO: Logging pods the kubelet thinks is on node latest-worker2 before test Apr 3 23:47:39.262: INFO: kindnet-zq6gp from kube-system started at 2020-03-15 18:28:07 +0000 UTC (1 container statuses recorded) Apr 3 23:47:39.262: INFO: Container kindnet-cni ready: true, restart count 0 Apr 3 23:47:39.262: INFO: kube-proxy-c5xlk from kube-system started at 2020-03-15 18:28:07 +0000 UTC (1 container statuses recorded) Apr 3 23:47:39.262: INFO: Container kube-proxy ready: true, restart count 0 [It] validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: verifying the node has the label node latest-worker STEP: verifying the node has the label node latest-worker2 Apr 3 23:47:39.330: INFO: Pod kindnet-vnjgh requesting resource cpu=100m on Node latest-worker Apr 3 23:47:39.330: INFO: Pod kindnet-zq6gp requesting resource cpu=100m on Node latest-worker2 Apr 3 23:47:39.330: INFO: Pod kube-proxy-c5xlk requesting resource cpu=0m on Node latest-worker2 Apr 3 23:47:39.330: INFO: Pod kube-proxy-s9v6p requesting resource cpu=0m on Node latest-worker STEP: Starting Pods to consume most of the cluster CPU. 
Apr 3 23:47:39.330: INFO: Creating a pod which consumes cpu=11130m on Node latest-worker
Apr 3 23:47:39.335: INFO: Creating a pod which consumes cpu=11130m on Node latest-worker2
STEP: Creating another pod that requires unavailable amount of CPU.
STEP: Considering event: Type = [Normal], Name = [filler-pod-02825cf2-e17b-49af-99eb-ad3522ac4d82.16027416a458faf4], Reason = [Scheduled], Message = [Successfully assigned sched-pred-7128/filler-pod-02825cf2-e17b-49af-99eb-ad3522ac4d82 to latest-worker]
STEP: Considering event: Type = [Normal], Name = [filler-pod-02825cf2-e17b-49af-99eb-ad3522ac4d82.16027416ed1431fd], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.2" already present on machine]
STEP: Considering event: Type = [Normal], Name = [filler-pod-02825cf2-e17b-49af-99eb-ad3522ac4d82.160274171df3a461], Reason = [Created], Message = [Created container filler-pod-02825cf2-e17b-49af-99eb-ad3522ac4d82]
STEP: Considering event: Type = [Normal], Name = [filler-pod-02825cf2-e17b-49af-99eb-ad3522ac4d82.1602741731b0255e], Reason = [Started], Message = [Started container filler-pod-02825cf2-e17b-49af-99eb-ad3522ac4d82]
STEP: Considering event: Type = [Normal], Name = [filler-pod-9695b976-0527-47d2-9bd4-ae955b863e7d.16027416a54def33], Reason = [Scheduled], Message = [Successfully assigned sched-pred-7128/filler-pod-9695b976-0527-47d2-9bd4-ae955b863e7d to latest-worker2]
STEP: Considering event: Type = [Normal], Name = [filler-pod-9695b976-0527-47d2-9bd4-ae955b863e7d.160274171b5c4f30], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.2" already present on machine]
STEP: Considering event: Type = [Normal], Name = [filler-pod-9695b976-0527-47d2-9bd4-ae955b863e7d.16027417487c8a07], Reason = [Created], Message = [Created container filler-pod-9695b976-0527-47d2-9bd4-ae955b863e7d]
STEP: Considering event: Type = [Normal], Name = [filler-pod-9695b976-0527-47d2-9bd4-ae955b863e7d.160274175790a2b6], Reason = [Started], Message = [Started container filler-pod-9695b976-0527-47d2-9bd4-ae955b863e7d]
STEP: Considering event: Type = [Warning], Name = [additional-pod.1602741794ceb972], Reason = [FailedScheduling], Message = [0/3 nodes are available: 1 node(s) had taints that the pod didn't tolerate, 2 Insufficient cpu.]
STEP: removing the label node off the node latest-worker
STEP: verifying the node doesn't have the label node
STEP: removing the label node off the node latest-worker2
STEP: verifying the node doesn't have the label node
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 3 23:47:44.545: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-7128" for this suite.
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:82
• [SLOW TEST:5.364 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40
  validates resource limits of pods that are allowed to run [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance]","total":275,"completed":32,"skipped":537,"failed":0}
SSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 3 23:47:44.555: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219
[It] should create and stop a working application [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating all guestbook components
Apr 3 23:47:44.602: INFO: apiVersion: v1
kind: Service
metadata:
  name: agnhost-slave
  labels:
    app: agnhost
    role: slave
    tier: backend
spec:
  ports:
  - port: 6379
  selector:
    app: agnhost
    role: slave
    tier: backend

Apr 3 23:47:44.602: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-3945'
Apr 3 23:47:44.863: INFO: stderr: ""
Apr 3 23:47:44.863: INFO: stdout: "service/agnhost-slave created\n"
Apr 3 23:47:44.863: INFO: apiVersion: v1
kind: Service
metadata:
  name: agnhost-master
  labels:
    app: agnhost
    role: master
    tier: backend
spec:
  ports:
  - port: 6379
    targetPort: 6379
  selector:
    app: agnhost
    role: master
    tier: backend

Apr 3 23:47:44.863: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-3945'
Apr 3 23:47:45.120: INFO: stderr: ""
Apr 3 23:47:45.120: INFO: stdout: "service/agnhost-master created\n"
Apr 3 23:47:45.120: INFO: apiVersion: v1
kind: Service
metadata:
  name: frontend
  labels:
    app: guestbook
    tier: frontend
spec:
  # if your cluster supports it, uncomment the following to automatically create
  # an external load-balanced IP for the frontend service.
  # type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: guestbook
    tier: frontend

Apr 3 23:47:45.120: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-3945'
Apr 3 23:47:45.402: INFO: stderr: ""
Apr 3 23:47:45.402: INFO: stdout: "service/frontend created\n"
Apr 3 23:47:45.403: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend
spec:
  replicas: 3
  selector:
    matchLabels:
      app: guestbook
      tier: frontend
  template:
    metadata:
      labels:
        app: guestbook
        tier: frontend
    spec:
      containers:
      - name: guestbook-frontend
        image: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12
        args: [ "guestbook", "--backend-port", "6379" ]
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 80

Apr 3 23:47:45.403: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-3945'
Apr 3 23:47:45.672: INFO: stderr: ""
Apr 3 23:47:45.672: INFO: stdout: "deployment.apps/frontend created\n"
Apr 3 23:47:45.672: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: agnhost-master
spec:
  replicas: 1
  selector:
    matchLabels:
      app: agnhost
      role: master
      tier: backend
  template:
    metadata:
      labels:
        app: agnhost
        role: master
        tier: backend
    spec:
      containers:
      - name: master
        image: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12
        args: [ "guestbook", "--http-port", "6379" ]
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 6379

Apr 3 23:47:45.672: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-3945'
Apr 3 23:47:45.929: INFO: stderr: ""
Apr 3 23:47:45.929: INFO: stdout: "deployment.apps/agnhost-master created\n"
Apr 3 23:47:45.929: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: agnhost-slave
spec:
  replicas: 2
  selector:
    matchLabels:
      app: agnhost
      role: slave
      tier: backend
  template:
    metadata:
      labels:
        app: agnhost
        role: slave
        tier: backend
    spec:
      containers:
      - name: slave
        image: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12
        args: [ "guestbook", "--slaveof", "agnhost-master", "--http-port", "6379" ]
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 6379

Apr 3 23:47:45.929: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-3945'
Apr 3 23:47:46.160: INFO: stderr: ""
Apr 3 23:47:46.160: INFO: stdout: "deployment.apps/agnhost-slave created\n"
STEP: validating guestbook app
Apr 3 23:47:46.160: INFO: Waiting for all frontend pods to be Running.
Apr 3 23:47:56.211: INFO: Waiting for frontend to serve content.
Apr 3 23:47:56.222: INFO: Trying to add a new entry to the guestbook.
Apr 3 23:47:56.234: INFO: Verifying that added entry can be retrieved.
STEP: using delete to clean up resources
Apr 3 23:47:56.241: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-3945'
Apr 3 23:47:56.398: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Apr 3 23:47:56.398: INFO: stdout: "service \"agnhost-slave\" force deleted\n"
STEP: using delete to clean up resources
Apr 3 23:47:56.399: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-3945'
Apr 3 23:47:56.518: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Apr 3 23:47:56.518: INFO: stdout: "service \"agnhost-master\" force deleted\n"
STEP: using delete to clean up resources
Apr 3 23:47:56.518: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-3945'
Apr 3 23:47:56.636: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Apr 3 23:47:56.636: INFO: stdout: "service \"frontend\" force deleted\n"
STEP: using delete to clean up resources
Apr 3 23:47:56.636: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-3945'
Apr 3 23:47:56.738: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Apr 3 23:47:56.738: INFO: stdout: "deployment.apps \"frontend\" force deleted\n"
STEP: using delete to clean up resources
Apr 3 23:47:56.738: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-3945'
Apr 3 23:47:56.865: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Apr 3 23:47:56.865: INFO: stdout: "deployment.apps \"agnhost-master\" force deleted\n"
STEP: using delete to clean up resources
Apr 3 23:47:56.865: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-3945'
Apr 3 23:47:56.972: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Apr 3 23:47:56.972: INFO: stdout: "deployment.apps \"agnhost-slave\" force deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 3 23:47:56.972: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-3945" for this suite.
• [SLOW TEST:12.425 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Guestbook application
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:310
    should create and stop a working application [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]","total":275,"completed":33,"skipped":547,"failed":0}
SSSSSSS
------------------------------
[sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 3 23:47:56.980: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test downward API volume plugin Apr 3 23:47:57.141: INFO: Waiting up to 5m0s for pod "downwardapi-volume-a55b5bdd-7387-4bf1-8452-ee9563dc1786" in namespace "downward-api-6914" to be "Succeeded or Failed" Apr 3 23:47:57.175: INFO: Pod "downwardapi-volume-a55b5bdd-7387-4bf1-8452-ee9563dc1786": Phase="Pending", Reason="", readiness=false. Elapsed: 34.505814ms Apr 3 23:47:59.414: INFO: Pod "downwardapi-volume-a55b5bdd-7387-4bf1-8452-ee9563dc1786": Phase="Pending", Reason="", readiness=false. Elapsed: 2.273027268s Apr 3 23:48:01.418: INFO: Pod "downwardapi-volume-a55b5bdd-7387-4bf1-8452-ee9563dc1786": Phase="Running", Reason="", readiness=true. Elapsed: 4.277533582s Apr 3 23:48:03.424: INFO: Pod "downwardapi-volume-a55b5bdd-7387-4bf1-8452-ee9563dc1786": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.28282225s STEP: Saw pod success Apr 3 23:48:03.424: INFO: Pod "downwardapi-volume-a55b5bdd-7387-4bf1-8452-ee9563dc1786" satisfied condition "Succeeded or Failed" Apr 3 23:48:03.427: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-a55b5bdd-7387-4bf1-8452-ee9563dc1786 container client-container: STEP: delete the pod Apr 3 23:48:03.459: INFO: Waiting for pod downwardapi-volume-a55b5bdd-7387-4bf1-8452-ee9563dc1786 to disappear Apr 3 23:48:03.472: INFO: Pod downwardapi-volume-a55b5bdd-7387-4bf1-8452-ee9563dc1786 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 3 23:48:03.473: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-6914" for this suite. • [SLOW TEST:6.498 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:37 should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance]","total":275,"completed":34,"skipped":554,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 3 23:48:03.479: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default 
service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with downward pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating pod pod-subpath-test-downwardapi-nhzx STEP: Creating a pod to test atomic-volume-subpath Apr 3 23:48:03.571: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-nhzx" in namespace "subpath-1044" to be "Succeeded or Failed" Apr 3 23:48:03.574: INFO: Pod "pod-subpath-test-downwardapi-nhzx": Phase="Pending", Reason="", readiness=false. Elapsed: 2.870516ms Apr 3 23:48:05.579: INFO: Pod "pod-subpath-test-downwardapi-nhzx": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007072372s Apr 3 23:48:07.581: INFO: Pod "pod-subpath-test-downwardapi-nhzx": Phase="Running", Reason="", readiness=true. Elapsed: 4.009700062s Apr 3 23:48:09.586: INFO: Pod "pod-subpath-test-downwardapi-nhzx": Phase="Running", Reason="", readiness=true. Elapsed: 6.014073014s Apr 3 23:48:11.590: INFO: Pod "pod-subpath-test-downwardapi-nhzx": Phase="Running", Reason="", readiness=true. Elapsed: 8.018366549s Apr 3 23:48:13.594: INFO: Pod "pod-subpath-test-downwardapi-nhzx": Phase="Running", Reason="", readiness=true. Elapsed: 10.022613595s Apr 3 23:48:15.599: INFO: Pod "pod-subpath-test-downwardapi-nhzx": Phase="Running", Reason="", readiness=true. Elapsed: 12.027091939s Apr 3 23:48:17.603: INFO: Pod "pod-subpath-test-downwardapi-nhzx": Phase="Running", Reason="", readiness=true. Elapsed: 14.031444103s Apr 3 23:48:19.607: INFO: Pod "pod-subpath-test-downwardapi-nhzx": Phase="Running", Reason="", readiness=true. Elapsed: 16.035786474s Apr 3 23:48:21.611: INFO: Pod "pod-subpath-test-downwardapi-nhzx": Phase="Running", Reason="", readiness=true. 
Elapsed: 18.039724878s Apr 3 23:48:23.615: INFO: Pod "pod-subpath-test-downwardapi-nhzx": Phase="Running", Reason="", readiness=true. Elapsed: 20.043564114s Apr 3 23:48:25.619: INFO: Pod "pod-subpath-test-downwardapi-nhzx": Phase="Running", Reason="", readiness=true. Elapsed: 22.047281844s Apr 3 23:48:27.623: INFO: Pod "pod-subpath-test-downwardapi-nhzx": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.051345549s STEP: Saw pod success Apr 3 23:48:27.623: INFO: Pod "pod-subpath-test-downwardapi-nhzx" satisfied condition "Succeeded or Failed" Apr 3 23:48:27.626: INFO: Trying to get logs from node latest-worker pod pod-subpath-test-downwardapi-nhzx container test-container-subpath-downwardapi-nhzx: STEP: delete the pod Apr 3 23:48:27.659: INFO: Waiting for pod pod-subpath-test-downwardapi-nhzx to disappear Apr 3 23:48:27.669: INFO: Pod pod-subpath-test-downwardapi-nhzx no longer exists STEP: Deleting pod pod-subpath-test-downwardapi-nhzx Apr 3 23:48:27.669: INFO: Deleting pod "pod-subpath-test-downwardapi-nhzx" in namespace "subpath-1044" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 3 23:48:27.672: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-1044" for this suite. 
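The pod shape this subpath test exercises can be sketched as follows: a downward API volume whose single file is mounted into the container via `subPath`, which the pod reads repeatedly while Running (hence the ~24 s of "Running" polls above). All names here are hypothetical; only the volume type and `subPath` usage follow the test.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-subpath-test-downwardapi-example   # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: test-container-subpath
    image: k8s.gcr.io/pause:3.2    # placeholder; the real test runs a reader loop
    volumeMounts:
    - name: downward
      mountPath: /test-volume
      subPath: podname             # mounts just the file "podname" from the volume
  volumes:
  - name: downward
    downwardAPI:
      items:
      - path: podname
        fieldRef:
          fieldPath: metadata.name   # pod metadata exposed as a file
```

The "atomic writer" part refers to how the kubelet publishes downward API and configMap/secret files via an atomically swapped symlink, which `subPath` mounts must still see consistently.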
• [SLOW TEST:24.200 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with downward pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [LinuxOnly] [Conformance]","total":275,"completed":35,"skipped":600,"failed":0} SSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 3 23:48:27.680: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating secret with name secret-test-map-dc17dd3c-53b6-4171-a090-923401a3e021 STEP: Creating a pod to test consume secrets Apr 3 23:48:27.749: INFO: Waiting up to 5m0s for pod "pod-secrets-125f48e9-dba9-4efa-88fb-edf9e13d9f03" in namespace "secrets-3443" to be "Succeeded or Failed" Apr 3 23:48:27.753: INFO: Pod "pod-secrets-125f48e9-dba9-4efa-88fb-edf9e13d9f03": Phase="Pending", Reason="", readiness=false. 
Elapsed: 4.278112ms Apr 3 23:48:29.757: INFO: Pod "pod-secrets-125f48e9-dba9-4efa-88fb-edf9e13d9f03": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008320175s Apr 3 23:48:31.762: INFO: Pod "pod-secrets-125f48e9-dba9-4efa-88fb-edf9e13d9f03": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012416847s STEP: Saw pod success Apr 3 23:48:31.762: INFO: Pod "pod-secrets-125f48e9-dba9-4efa-88fb-edf9e13d9f03" satisfied condition "Succeeded or Failed" Apr 3 23:48:31.765: INFO: Trying to get logs from node latest-worker pod pod-secrets-125f48e9-dba9-4efa-88fb-edf9e13d9f03 container secret-volume-test: STEP: delete the pod Apr 3 23:48:31.796: INFO: Waiting for pod pod-secrets-125f48e9-dba9-4efa-88fb-edf9e13d9f03 to disappear Apr 3 23:48:31.801: INFO: Pod pod-secrets-125f48e9-dba9-4efa-88fb-edf9e13d9f03 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 3 23:48:31.801: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-3443" for this suite. 
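A sketch of the secret-volume pod this test creates, showing what "mappings" and "Item Mode set" mean in the spec: `items` remaps a secret key to a different on-disk path, and `mode` sets per-file permission bits. Names and values here are hypothetical illustrations, not the generated ones from this run.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets-example        # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: secret-volume-test
    image: k8s.gcr.io/pause:3.2    # placeholder image
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
      readOnly: true
  volumes:
  - name: secret-volume
    secret:
      secretName: secret-test-map-example
      items:                       # the "mappings": key renamed on disk
      - key: data-1
        path: new-path-data-1
        mode: 0400                 # the "Item Mode": per-file permission bits
```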
•{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":36,"skipped":603,"failed":0} SSSSSSSSSS ------------------------------ [sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 3 23:48:31.808: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating projection with configMap that has name projected-configmap-test-upd-3a7d153b-dbdb-40ad-a964-93ab9801fd19 STEP: Creating the pod STEP: Updating configmap projected-configmap-test-upd-3a7d153b-dbdb-40ad-a964-93ab9801fd19 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 3 23:49:44.250: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-4418" for this suite. 
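A sketch of the projected-configMap pod the test above creates (container name and mount path are assumptions; the configMap name is the one from this run's log):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-configmaps-example   # hypothetical name
spec:
  containers:
  - name: projected-configmap-volume-test
    image: k8s.gcr.io/pause:3.2            # placeholder; the real test tails the file
    volumeMounts:
    - name: projected-volume
      mountPath: /etc/projected-configmap-volume
  volumes:
  - name: projected-volume
    projected:
      sources:
      - configMap:
          name: projected-configmap-test-upd-3a7d153b-dbdb-40ad-a964-93ab9801fd19
```

Updates to the backing ConfigMap are propagated into the mounted files by the kubelet on its periodic sync, not instantly, which is why the test polls for the new content; most of the 72-second wall time above is that wait.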
• [SLOW TEST:72.450 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:36 updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance]","total":275,"completed":37,"skipped":613,"failed":0} SSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition getting/updating/patching custom resource definition status sub-resource works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 3 23:49:44.258: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] getting/updating/patching custom resource definition status sub-resource works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Apr 3 23:49:44.362: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 3 23:49:44.910: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-6017" for this suite. 
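The status sub-resource being exercised above is opted into per CRD version. A minimal sketch of such a CRD (group and names are hypothetical; only the `subresources.status` stanza is the point):

```yaml
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: noxus.example.com          # hypothetical group/name
spec:
  group: example.com
  scope: Namespaced
  names:
    plural: noxus
    singular: noxu
    kind: Noxu
  versions:
  - name: v1
    served: true
    storage: true
    schema:
      openAPIV3Schema:
        type: object
        x-kubernetes-preserve-unknown-fields: true
    subresources:
      status: {}    # exposes /status, which the test gets, updates, and patches
```

With `status: {}` enabled, writes to the main endpoint ignore `.status`, and writes to the `/status` endpoint change only `.status`, which is the separation the test verifies.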
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition getting/updating/patching custom resource definition status sub-resource works [Conformance]","total":275,"completed":38,"skipped":622,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 3 23:49:44.932: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test env composition Apr 3 23:49:45.058: INFO: Waiting up to 5m0s for pod "var-expansion-c2757380-95f2-4591-8995-931f2ffde172" in namespace "var-expansion-2344" to be "Succeeded or Failed" Apr 3 23:49:45.062: INFO: Pod "var-expansion-c2757380-95f2-4591-8995-931f2ffde172": Phase="Pending", Reason="", readiness=false. Elapsed: 3.298439ms Apr 3 23:49:47.074: INFO: Pod "var-expansion-c2757380-95f2-4591-8995-931f2ffde172": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015444341s Apr 3 23:49:49.078: INFO: Pod "var-expansion-c2757380-95f2-4591-8995-931f2ffde172": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.019567337s STEP: Saw pod success Apr 3 23:49:49.078: INFO: Pod "var-expansion-c2757380-95f2-4591-8995-931f2ffde172" satisfied condition "Succeeded or Failed" Apr 3 23:49:49.081: INFO: Trying to get logs from node latest-worker2 pod var-expansion-c2757380-95f2-4591-8995-931f2ffde172 container dapi-container: STEP: delete the pod Apr 3 23:49:49.113: INFO: Waiting for pod var-expansion-c2757380-95f2-4591-8995-931f2ffde172 to disappear Apr 3 23:49:49.128: INFO: Pod var-expansion-c2757380-95f2-4591-8995-931f2ffde172 no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 3 23:49:49.128: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-2344" for this suite. •{"msg":"PASSED [k8s.io] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance]","total":275,"completed":39,"skipped":679,"failed":0} SSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 3 23:49:49.183: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test emptydir 0666 on node default medium Apr 3 23:49:49.250: INFO: Waiting up to 5m0s for pod "pod-aa2812ee-a6a0-4c38-a625-277ea5a90886" in namespace 
"emptydir-8484" to be "Succeeded or Failed" Apr 3 23:49:49.260: INFO: Pod "pod-aa2812ee-a6a0-4c38-a625-277ea5a90886": Phase="Pending", Reason="", readiness=false. Elapsed: 9.413373ms Apr 3 23:49:51.263: INFO: Pod "pod-aa2812ee-a6a0-4c38-a625-277ea5a90886": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012835728s Apr 3 23:49:53.267: INFO: Pod "pod-aa2812ee-a6a0-4c38-a625-277ea5a90886": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.016771124s STEP: Saw pod success Apr 3 23:49:53.267: INFO: Pod "pod-aa2812ee-a6a0-4c38-a625-277ea5a90886" satisfied condition "Succeeded or Failed" Apr 3 23:49:53.270: INFO: Trying to get logs from node latest-worker pod pod-aa2812ee-a6a0-4c38-a625-277ea5a90886 container test-container: STEP: delete the pod Apr 3 23:49:53.289: INFO: Waiting for pod pod-aa2812ee-a6a0-4c38-a625-277ea5a90886 to disappear Apr 3 23:49:53.293: INFO: Pod pod-aa2812ee-a6a0-4c38-a625-277ea5a90886 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 3 23:49:53.293: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-8484" for this suite. 
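The test name's `(non-root,0666,default)` triple encodes the matrix point being checked: run as a non-root user, expect file mode 0666, on the default (node-disk) emptyDir medium. A hedged sketch of the pod shape (user ID and names are assumptions):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-emptydir-example       # hypothetical name
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1001                # the "non-root" leg of the matrix
  containers:
  - name: test-container
    image: k8s.gcr.io/pause:3.2    # placeholder; the real container writes a file
                                   # with mode 0666 and echoes its permissions
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir: {}                   # empty medium = "default" (node disk, not tmpfs)
```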
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":40,"skipped":685,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 3 23:49:53.301: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Apr 3 23:49:54.054: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Apr 3 23:49:56.120: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721554594, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721554594, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, 
ext:63721554594, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721554594, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6cc9cc9dc\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Apr 3 23:49:59.132: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] patching/updating a mutating webhook should work [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a mutating webhook configuration
STEP: Updating a mutating webhook configuration's rules to not include the create operation
STEP: Creating a configMap that should not be mutated
STEP: Patching a mutating webhook configuration's rules to include the create operation
STEP: Creating a configMap that should be mutated
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 3 23:49:59.280: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-6029" for this suite.
STEP: Destroying namespace "webhook-6029-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
• [SLOW TEST:6.081 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  patching/updating a mutating webhook should work [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]","total":275,"completed":41,"skipped":701,"failed":0}
SSSSSSSSSS
------------------------------
[sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 3 23:49:59.382: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for pods for Hostname [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a test headless service
STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-2.dns-test-service-2.dns-1905.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-2.dns-test-service-2.dns-1905.svc.cluster.local;test -n "$$(getent hosts dns-querier-2)" && echo OK > /results/wheezy_hosts@dns-querier-2;podARec=$$(hostname -i| awk -F.
'{print $$1"-"$$2"-"$$3"-"$$4".dns-1905.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done
STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-2.dns-test-service-2.dns-1905.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-2.dns-test-service-2.dns-1905.svc.cluster.local;test -n "$$(getent hosts dns-querier-2)" && echo OK > /results/jessie_hosts@dns-querier-2;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-1905.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done
STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Apr 3 23:50:05.550: INFO: DNS probes using dns-1905/dns-test-d083fe9a-67ac-4eff-8d2d-7b00665f362f succeeded
STEP: deleting the pod
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 3 23:50:05.659: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-1905" for this suite.
• [SLOW TEST:6.289 seconds]
[sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide DNS for pods for Hostname [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance]","total":275,"completed":42,"skipped":711,"failed":0}
SSSSSSS
------------------------------
[k8s.io] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 3 23:50:05.672: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54
[It] with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 3 23:51:05.997: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-5387" for this suite.
• [SLOW TEST:60.332 seconds]
[k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]","total":275,"completed":43,"skipped":718,"failed":0}
SSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group and version but different kinds [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 3 23:51:06.005: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for multiple CRDs of same group and version but different kinds [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: CRs in the same group and version but different kinds (two CRDs) show up in OpenAPI documentation
Apr 3 23:51:06.055: INFO: >>> kubeConfig: /root/.kube/config
Apr 3 23:51:07.994: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 3 23:51:18.541: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP:
Destroying namespace "crd-publish-openapi-6049" for this suite. • [SLOW TEST:12.544 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for multiple CRDs of same group and version but different kinds [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group and version but different kinds [Conformance]","total":275,"completed":44,"skipped":733,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 3 23:51:18.549: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:178 [It] should contain environment variables for services [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Apr 3 23:51:22.627: INFO: Waiting up to 5m0s for pod "client-envvars-9e43596a-93a4-4b99-a7b1-c5f183a3d27a" in namespace "pods-2540" to be "Succeeded or Failed" Apr 3 23:51:22.644: INFO: Pod "client-envvars-9e43596a-93a4-4b99-a7b1-c5f183a3d27a": Phase="Pending", Reason="", readiness=false. 
Elapsed: 16.514124ms
Apr 3 23:51:24.649: INFO: Pod "client-envvars-9e43596a-93a4-4b99-a7b1-c5f183a3d27a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021042816s
Apr 3 23:51:26.652: INFO: Pod "client-envvars-9e43596a-93a4-4b99-a7b1-c5f183a3d27a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.024683556s
STEP: Saw pod success
Apr 3 23:51:26.652: INFO: Pod "client-envvars-9e43596a-93a4-4b99-a7b1-c5f183a3d27a" satisfied condition "Succeeded or Failed"
Apr 3 23:51:26.656: INFO: Trying to get logs from node latest-worker pod client-envvars-9e43596a-93a4-4b99-a7b1-c5f183a3d27a container env3cont:
STEP: delete the pod
Apr 3 23:51:26.686: INFO: Waiting for pod client-envvars-9e43596a-93a4-4b99-a7b1-c5f183a3d27a to disappear
Apr 3 23:51:26.690: INFO: Pod client-envvars-9e43596a-93a4-4b99-a7b1-c5f183a3d27a no longer exists
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 3 23:51:26.691: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-2540" for this suite.
• [SLOW TEST:8.148 seconds]
[k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  should contain environment variables for services [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance]","total":275,"completed":45,"skipped":772,"failed":0}
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 3 23:51:26.698: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:38
[BeforeEach] when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:82
[It] should have an terminated reason [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 3 23:51:30.798: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-9330" for this suite.
•{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance]","total":275,"completed":46,"skipped":793,"failed":0} SSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD without validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 3 23:51:30.807: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD without validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Apr 3 23:51:30.878: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with any unknown properties Apr 3 23:51:33.815: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-4089 create -f -' Apr 3 23:51:36.905: INFO: stderr: "" Apr 3 23:51:36.905: INFO: stdout: "e2e-test-crd-publish-openapi-1888-crd.crd-publish-openapi-test-empty.example.com/test-cr created\n" Apr 3 23:51:36.905: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-4089 delete e2e-test-crd-publish-openapi-1888-crds test-cr' Apr 3 23:51:37.003: INFO: stderr: "" Apr 3 23:51:37.003: INFO: stdout: "e2e-test-crd-publish-openapi-1888-crd.crd-publish-openapi-test-empty.example.com \"test-cr\" deleted\n" Apr 3 23:51:37.003: INFO: Running 
'/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-4089 apply -f -'
Apr 3 23:51:37.250: INFO: stderr: ""
Apr 3 23:51:37.250: INFO: stdout: "e2e-test-crd-publish-openapi-1888-crd.crd-publish-openapi-test-empty.example.com/test-cr created\n"
Apr 3 23:51:37.250: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-4089 delete e2e-test-crd-publish-openapi-1888-crds test-cr'
Apr 3 23:51:37.368: INFO: stderr: ""
Apr 3 23:51:37.368: INFO: stdout: "e2e-test-crd-publish-openapi-1888-crd.crd-publish-openapi-test-empty.example.com \"test-cr\" deleted\n"
STEP: kubectl explain works to explain CR without validation schema
Apr 3 23:51:37.368: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-1888-crds'
Apr 3 23:51:37.591: INFO: stderr: ""
Apr 3 23:51:37.591: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-1888-crd\nVERSION: crd-publish-openapi-test-empty.example.com/v1\n\nDESCRIPTION:\n \n"
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 3 23:51:40.498: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-4089" for this suite.
• [SLOW TEST:9.700 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for CRD without validation schema [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD without validation schema [Conformance]","total":275,"completed":47,"skipped":801,"failed":0}
SSSSSS
------------------------------
[k8s.io] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 3 23:51:40.508: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test override command
Apr 3 23:51:40.587: INFO: Waiting up to 5m0s for pod "client-containers-cbfa5aec-4739-4ce0-b6ff-79e0f0a05bc2" in namespace "containers-7831" to be "Succeeded or Failed"
Apr 3 23:51:40.598: INFO: Pod "client-containers-cbfa5aec-4739-4ce0-b6ff-79e0f0a05bc2": Phase="Pending", Reason="", readiness=false. Elapsed: 11.194918ms
Apr 3 23:51:42.602: INFO: Pod "client-containers-cbfa5aec-4739-4ce0-b6ff-79e0f0a05bc2": Phase="Pending", Reason="", readiness=false.
Elapsed: 2.015068562s
Apr 3 23:51:44.606: INFO: Pod "client-containers-cbfa5aec-4739-4ce0-b6ff-79e0f0a05bc2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.019423908s
STEP: Saw pod success
Apr 3 23:51:44.606: INFO: Pod "client-containers-cbfa5aec-4739-4ce0-b6ff-79e0f0a05bc2" satisfied condition "Succeeded or Failed"
Apr 3 23:51:44.610: INFO: Trying to get logs from node latest-worker2 pod client-containers-cbfa5aec-4739-4ce0-b6ff-79e0f0a05bc2 container test-container:
STEP: delete the pod
Apr 3 23:51:44.653: INFO: Waiting for pod client-containers-cbfa5aec-4739-4ce0-b6ff-79e0f0a05bc2 to disappear
Apr 3 23:51:44.674: INFO: Pod client-containers-cbfa5aec-4739-4ce0-b6ff-79e0f0a05bc2 no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 3 23:51:44.674: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-7831" for this suite.
•{"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]","total":275,"completed":48,"skipped":807,"failed":0}
SSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 3 23:51:44.681: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42
[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test downward API volume plugin
Apr 3 23:51:44.754: INFO: Waiting up to 5m0s for pod "downwardapi-volume-f1a4b2c8-57ed-43ee-935d-a9f6bf7cc583" in namespace "projected-1079" to be "Succeeded or Failed"
Apr 3 23:51:44.818: INFO: Pod "downwardapi-volume-f1a4b2c8-57ed-43ee-935d-a9f6bf7cc583": Phase="Pending", Reason="", readiness=false. Elapsed: 63.081362ms
Apr 3 23:51:46.821: INFO: Pod "downwardapi-volume-f1a4b2c8-57ed-43ee-935d-a9f6bf7cc583": Phase="Pending", Reason="", readiness=false. Elapsed: 2.06680149s
Apr 3 23:51:48.835: INFO: Pod "downwardapi-volume-f1a4b2c8-57ed-43ee-935d-a9f6bf7cc583": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.080848019s
STEP: Saw pod success
Apr 3 23:51:48.835: INFO: Pod "downwardapi-volume-f1a4b2c8-57ed-43ee-935d-a9f6bf7cc583" satisfied condition "Succeeded or Failed"
Apr 3 23:51:48.838: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-f1a4b2c8-57ed-43ee-935d-a9f6bf7cc583 container client-container:
STEP: delete the pod
Apr 3 23:51:48.868: INFO: Waiting for pod downwardapi-volume-f1a4b2c8-57ed-43ee-935d-a9f6bf7cc583 to disappear
Apr 3 23:51:48.876: INFO: Pod downwardapi-volume-f1a4b2c8-57ed-43ee-935d-a9f6bf7cc583 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 3 23:51:48.876: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1079" for this suite.
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":275,"completed":49,"skipped":816,"failed":0} SSSSSSSSSS ------------------------------ [sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 3 23:51:48.883: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-8304.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-1.dns-test-service.dns-8304.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/wheezy_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-8304.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done
STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-8304.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-1.dns-test-service.dns-8304.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/jessie_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-8304.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done
STEP: creating a pod to probe /etc/hosts
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Apr 3 23:51:55.003: INFO: DNS probes using dns-8304/dns-test-2ce0d7ca-fe85-4da0-9765-56835fabc3ea succeeded
STEP: deleting the pod
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 3 23:51:55.035: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-8304" for this suite.
• [SLOW TEST:6.174 seconds]
[sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]","total":275,"completed":50,"skipped":826,"failed":0}
SSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 3 23:51:55.058: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete RS created by deployment when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: create the deployment
STEP: Wait for the Deployment to create new ReplicaSet
STEP: delete the deployment
STEP: wait for all rs to be garbage collected
STEP: expected 0 rs, got 1 rs
STEP: expected 0 pods, got 2 pods
STEP: Gathering metrics
W0403 23:51:56.216455 7 metrics_grabber.go:84] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Apr 3 23:51:56.216: INFO: For apiserver_request_total:
For apiserver_request_latency_seconds:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:
[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 3 23:51:56.216: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-5637" for this suite.
•{"msg":"PASSED [sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance]","total":275,"completed":51,"skipped":840,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 3 23:51:56.225: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153 [It] should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating the pod Apr 3 23:51:56.265: INFO: PodSpec: initContainers in spec.initContainers Apr 3 23:52:50.756: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-b7cc3e9d-4f00-4546-8a7a-a8f6ce851b4d", GenerateName:"", Namespace:"init-container-4629", SelfLink:"/api/v1/namespaces/init-container-4629/pods/pod-init-b7cc3e9d-4f00-4546-8a7a-a8f6ce851b4d", UID:"03270cd9-a667-490f-8104-2b41e506db9a", ResourceVersion:"5192569", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63721554716, loc:(*time.Location)(0x7b1e080)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"265656107"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-vmb7d", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc005f82ac0), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}}, InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-vmb7d", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", 
MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-vmb7d", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"k8s.gcr.io/pause:3.2", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}}, 
VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-vmb7d", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc003f0ca98), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"latest-worker2", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc001d1abd0), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc003f0cc40)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc003f0cc70)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc003f0cc78), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc003f0cc7c), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil)}, Status:v1.PodStatus{Phase:"Pending", 
Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721554716, loc:(*time.Location)(0x7b1e080)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721554716, loc:(*time.Location)(0x7b1e080)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721554716, loc:(*time.Location)(0x7b1e080)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721554716, loc:(*time.Location)(0x7b1e080)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"172.17.0.12", PodIP:"10.244.1.221", PodIPs:[]v1.PodIP{v1.PodIP{IP:"10.244.1.221"}}, StartTime:(*v1.Time)(0xc0048ec940), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc001d1acb0)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc001d1ad20)}, Ready:false, RestartCount:3, Image:"docker.io/library/busybox:1.29", 
ImageID:"docker.io/library/busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796", ContainerID:"containerd://57e949763b787c29d1afb5fb3224fbe8dfa43ffa7b8cbbb083a03d0a7f877143", Started:(*bool)(nil)}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc0048ec980), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"docker.io/library/busybox:1.29", ImageID:"", ContainerID:"", Started:(*bool)(nil)}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc0048ec960), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.2", ImageID:"", ContainerID:"", Started:(*bool)(0xc003f0cd6f)}}, QOSClass:"Burstable", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)}} [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 3 23:52:50.757: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-4629" for this suite. 
• [SLOW TEST:54.581 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance]","total":275,"completed":52,"skipped":856,"failed":0} SSS ------------------------------ [sig-network] Services should provide secure master service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 3 23:52:50.807: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:698 [It] should provide secure master service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 3 23:52:50.923: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-1101" for this suite. 
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:702 •{"msg":"PASSED [sig-network] Services should provide secure master service [Conformance]","total":275,"completed":53,"skipped":859,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 3 23:52:50.932: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with configmap pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating pod pod-subpath-test-configmap-6226 STEP: Creating a pod to test atomic-volume-subpath Apr 3 23:52:51.022: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-6226" in namespace "subpath-5450" to be "Succeeded or Failed" Apr 3 23:52:51.037: INFO: Pod "pod-subpath-test-configmap-6226": Phase="Pending", Reason="", readiness=false. Elapsed: 15.832643ms Apr 3 23:52:53.041: INFO: Pod "pod-subpath-test-configmap-6226": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019159021s Apr 3 23:52:55.044: INFO: Pod "pod-subpath-test-configmap-6226": Phase="Running", Reason="", readiness=true. Elapsed: 4.022465578s Apr 3 23:52:57.048: INFO: Pod "pod-subpath-test-configmap-6226": Phase="Running", Reason="", readiness=true. 
Elapsed: 6.026888886s Apr 3 23:52:59.056: INFO: Pod "pod-subpath-test-configmap-6226": Phase="Running", Reason="", readiness=true. Elapsed: 8.03456269s Apr 3 23:53:01.060: INFO: Pod "pod-subpath-test-configmap-6226": Phase="Running", Reason="", readiness=true. Elapsed: 10.038510751s Apr 3 23:53:03.064: INFO: Pod "pod-subpath-test-configmap-6226": Phase="Running", Reason="", readiness=true. Elapsed: 12.042213067s Apr 3 23:53:05.068: INFO: Pod "pod-subpath-test-configmap-6226": Phase="Running", Reason="", readiness=true. Elapsed: 14.046038666s Apr 3 23:53:07.071: INFO: Pod "pod-subpath-test-configmap-6226": Phase="Running", Reason="", readiness=true. Elapsed: 16.049945404s Apr 3 23:53:09.076: INFO: Pod "pod-subpath-test-configmap-6226": Phase="Running", Reason="", readiness=true. Elapsed: 18.054075711s Apr 3 23:53:11.080: INFO: Pod "pod-subpath-test-configmap-6226": Phase="Running", Reason="", readiness=true. Elapsed: 20.058295399s Apr 3 23:53:13.084: INFO: Pod "pod-subpath-test-configmap-6226": Phase="Running", Reason="", readiness=true. Elapsed: 22.0622062s Apr 3 23:53:15.088: INFO: Pod "pod-subpath-test-configmap-6226": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 24.066285874s STEP: Saw pod success Apr 3 23:53:15.088: INFO: Pod "pod-subpath-test-configmap-6226" satisfied condition "Succeeded or Failed" Apr 3 23:53:15.091: INFO: Trying to get logs from node latest-worker pod pod-subpath-test-configmap-6226 container test-container-subpath-configmap-6226: STEP: delete the pod Apr 3 23:53:15.125: INFO: Waiting for pod pod-subpath-test-configmap-6226 to disappear Apr 3 23:53:15.130: INFO: Pod pod-subpath-test-configmap-6226 no longer exists STEP: Deleting pod pod-subpath-test-configmap-6226 Apr 3 23:53:15.130: INFO: Deleting pod "pod-subpath-test-configmap-6226" in namespace "subpath-5450" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 3 23:53:15.132: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-5450" for this suite. • [SLOW TEST:24.206 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with configmap pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [LinuxOnly] [Conformance]","total":275,"completed":54,"skipped":873,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicaSet should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 3 
23:53:15.139: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replicaset STEP: Waiting for a default service account to be provisioned in namespace [It] should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Apr 3 23:53:15.245: INFO: Creating ReplicaSet my-hostname-basic-75af17ef-53d1-4787-b174-06ef00b19ddd Apr 3 23:53:15.255: INFO: Pod name my-hostname-basic-75af17ef-53d1-4787-b174-06ef00b19ddd: Found 0 pods out of 1 Apr 3 23:53:20.259: INFO: Pod name my-hostname-basic-75af17ef-53d1-4787-b174-06ef00b19ddd: Found 1 pods out of 1 Apr 3 23:53:20.259: INFO: Ensuring a pod for ReplicaSet "my-hostname-basic-75af17ef-53d1-4787-b174-06ef00b19ddd" is running Apr 3 23:53:20.277: INFO: Pod "my-hostname-basic-75af17ef-53d1-4787-b174-06ef00b19ddd-9hpzd" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-04-03 23:53:15 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-04-03 23:53:18 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-04-03 23:53:18 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-04-03 23:53:15 +0000 UTC Reason: Message:}]) Apr 3 23:53:20.277: INFO: Trying to dial the pod Apr 3 23:53:25.288: INFO: Controller my-hostname-basic-75af17ef-53d1-4787-b174-06ef00b19ddd: Got expected result from replica 1 [my-hostname-basic-75af17ef-53d1-4787-b174-06ef00b19ddd-9hpzd]: "my-hostname-basic-75af17ef-53d1-4787-b174-06ef00b19ddd-9hpzd", 1 of 1 required successes so far [AfterEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 3 23:53:25.288: INFO: 
Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replicaset-3754" for this suite. • [SLOW TEST:10.157 seconds] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-apps] ReplicaSet should serve a basic image on each replica with a public image [Conformance]","total":275,"completed":55,"skipped":905,"failed":0} SSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 3 23:53:25.297: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test emptydir 0777 on tmpfs Apr 3 23:53:25.372: INFO: Waiting up to 5m0s for pod "pod-e6fc9b6a-1b2b-462f-a66c-d37a30014c4b" in namespace "emptydir-5457" to be "Succeeded or Failed" Apr 3 23:53:25.379: INFO: Pod "pod-e6fc9b6a-1b2b-462f-a66c-d37a30014c4b": Phase="Pending", Reason="", readiness=false. Elapsed: 7.163726ms Apr 3 23:53:27.383: INFO: Pod "pod-e6fc9b6a-1b2b-462f-a66c-d37a30014c4b": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.010933188s Apr 3 23:53:29.388: INFO: Pod "pod-e6fc9b6a-1b2b-462f-a66c-d37a30014c4b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.015682292s STEP: Saw pod success Apr 3 23:53:29.388: INFO: Pod "pod-e6fc9b6a-1b2b-462f-a66c-d37a30014c4b" satisfied condition "Succeeded or Failed" Apr 3 23:53:29.391: INFO: Trying to get logs from node latest-worker2 pod pod-e6fc9b6a-1b2b-462f-a66c-d37a30014c4b container test-container: STEP: delete the pod Apr 3 23:53:29.435: INFO: Waiting for pod pod-e6fc9b6a-1b2b-462f-a66c-d37a30014c4b to disappear Apr 3 23:53:29.458: INFO: Pod pod-e6fc9b6a-1b2b-462f-a66c-d37a30014c4b no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 3 23:53:29.459: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-5457" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":56,"skipped":915,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a secret. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 3 23:53:29.480: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a secret. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Discovering how many secrets are in namespace by default STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a Secret STEP: Ensuring resource quota status captures secret creation STEP: Deleting a secret STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 3 23:53:46.604: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-7012" for this suite. • [SLOW TEST:17.134 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a secret. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a secret. 
[Conformance]","total":275,"completed":57,"skipped":940,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 3 23:53:46.615: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to start watching from a specific resource version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating a new configmap STEP: modifying the configmap once STEP: modifying the configmap a second time STEP: deleting the configmap STEP: creating a watch on configmaps from the resource version returned by the first update STEP: Expecting to observe notifications for all changes to the configmap after the first update Apr 3 23:53:46.719: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version watch-8244 /api/v1/namespaces/watch-8244/configmaps/e2e-watch-test-resource-version 6508dd73-6852-4c62-9106-9409e4000ef7 5192858 0 2020-04-03 23:53:46 +0000 UTC map[watch-this-configmap:from-resource-version] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} Apr 3 23:53:46.719: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version watch-8244 /api/v1/namespaces/watch-8244/configmaps/e2e-watch-test-resource-version 6508dd73-6852-4c62-9106-9409e4000ef7 5192859 0 2020-04-03 23:53:46 +0000 UTC map[watch-this-configmap:from-resource-version] map[] [] [] []},Data:map[string]string{mutation: 
2,},BinaryData:map[string][]byte{},Immutable:nil,} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 3 23:53:46.719: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-8244" for this suite. •{"msg":"PASSED [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance]","total":275,"completed":58,"skipped":968,"failed":0} SSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 3 23:53:46.734: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test emptydir 0777 on node default medium Apr 3 23:53:46.785: INFO: Waiting up to 5m0s for pod "pod-50c46092-581f-4892-8942-1f2784b1b539" in namespace "emptydir-5288" to be "Succeeded or Failed" Apr 3 23:53:46.807: INFO: Pod "pod-50c46092-581f-4892-8942-1f2784b1b539": Phase="Pending", Reason="", readiness=false. Elapsed: 22.069213ms Apr 3 23:53:48.811: INFO: Pod "pod-50c46092-581f-4892-8942-1f2784b1b539": Phase="Pending", Reason="", readiness=false. Elapsed: 2.026246352s Apr 3 23:53:50.815: INFO: Pod "pod-50c46092-581f-4892-8942-1f2784b1b539": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.029561322s STEP: Saw pod success Apr 3 23:53:50.815: INFO: Pod "pod-50c46092-581f-4892-8942-1f2784b1b539" satisfied condition "Succeeded or Failed" Apr 3 23:53:50.817: INFO: Trying to get logs from node latest-worker pod pod-50c46092-581f-4892-8942-1f2784b1b539 container test-container: STEP: delete the pod Apr 3 23:53:50.850: INFO: Waiting for pod pod-50c46092-581f-4892-8942-1f2784b1b539 to disappear Apr 3 23:53:50.859: INFO: Pod pod-50c46092-581f-4892-8942-1f2784b1b539 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 3 23:53:50.859: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-5288" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":59,"skipped":975,"failed":0} SSSS ------------------------------ [sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 3 23:53:50.867: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a test namespace STEP: Waiting for a default service account to be provisioned in namespace STEP: Creating a service in the namespace STEP: Deleting the namespace STEP: Waiting for the 
namespace to be removed. STEP: Recreating the namespace STEP: Verifying there is no service in the namespace [AfterEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 3 23:53:57.095: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "namespaces-7962" for this suite. STEP: Destroying namespace "nsdeletetest-9592" for this suite. Apr 3 23:53:57.110: INFO: Namespace nsdeletetest-9592 was already deleted STEP: Destroying namespace "nsdeletetest-2626" for this suite. • [SLOW TEST:6.248 seconds] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance]","total":275,"completed":60,"skipped":979,"failed":0} SSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 3 23:53:57.115: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should provide podname only [NodeConformance] 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test downward API volume plugin Apr 3 23:53:57.179: INFO: Waiting up to 5m0s for pod "downwardapi-volume-78391a74-6146-4b7a-baa9-048c2345f14f" in namespace "projected-186" to be "Succeeded or Failed" Apr 3 23:53:57.182: INFO: Pod "downwardapi-volume-78391a74-6146-4b7a-baa9-048c2345f14f": Phase="Pending", Reason="", readiness=false. Elapsed: 3.865613ms Apr 3 23:53:59.186: INFO: Pod "downwardapi-volume-78391a74-6146-4b7a-baa9-048c2345f14f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007467831s Apr 3 23:54:01.190: INFO: Pod "downwardapi-volume-78391a74-6146-4b7a-baa9-048c2345f14f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011838148s STEP: Saw pod success Apr 3 23:54:01.190: INFO: Pod "downwardapi-volume-78391a74-6146-4b7a-baa9-048c2345f14f" satisfied condition "Succeeded or Failed" Apr 3 23:54:01.194: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-78391a74-6146-4b7a-baa9-048c2345f14f container client-container: STEP: delete the pod Apr 3 23:54:01.216: INFO: Waiting for pod downwardapi-volume-78391a74-6146-4b7a-baa9-048c2345f14f to disappear Apr 3 23:54:01.233: INFO: Pod downwardapi-volume-78391a74-6146-4b7a-baa9-048c2345f14f no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 3 23:54:01.233: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-186" for this suite. 
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance]","total":275,"completed":61,"skipped":987,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 3 23:54:01.246: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Apr 3 23:54:01.802: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Apr 3 23:54:03.813: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721554841, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721554841, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", 
LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721554841, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721554841, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6cc9cc9dc\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Apr 3 23:54:06.837: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate custom resource with different stored version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Apr 3 23:54:06.841: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the mutating webhook for custom resource e2e-test-webhook-9740-crds.webhook.example.com via the AdmissionRegistration API STEP: Creating a custom resource while v1 is storage version STEP: Patching Custom Resource Definition to set v2 as storage STEP: Patching the custom resource while v2 is storage version [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 3 23:54:08.080: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-715" for this suite. STEP: Destroying namespace "webhook-715-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:7.021 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate custom resource with different stored version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","total":275,"completed":62,"skipped":1022,"failed":0} SSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 3 23:54:08.268: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with secret pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating pod pod-subpath-test-secret-6l6q STEP: Creating a pod to test atomic-volume-subpath Apr 3 23:54:08.377: INFO: Waiting up to 5m0s for pod "pod-subpath-test-secret-6l6q" in namespace "subpath-4744" to be "Succeeded or Failed" Apr 3 23:54:08.397: INFO: Pod "pod-subpath-test-secret-6l6q": 
Phase="Pending", Reason="", readiness=false. Elapsed: 19.241638ms Apr 3 23:54:10.424: INFO: Pod "pod-subpath-test-secret-6l6q": Phase="Pending", Reason="", readiness=false. Elapsed: 2.046212955s Apr 3 23:54:12.427: INFO: Pod "pod-subpath-test-secret-6l6q": Phase="Running", Reason="", readiness=true. Elapsed: 4.050060471s Apr 3 23:54:14.454: INFO: Pod "pod-subpath-test-secret-6l6q": Phase="Running", Reason="", readiness=true. Elapsed: 6.076980583s Apr 3 23:54:16.457: INFO: Pod "pod-subpath-test-secret-6l6q": Phase="Running", Reason="", readiness=true. Elapsed: 8.080096794s Apr 3 23:54:18.461: INFO: Pod "pod-subpath-test-secret-6l6q": Phase="Running", Reason="", readiness=true. Elapsed: 10.083148484s Apr 3 23:54:20.465: INFO: Pod "pod-subpath-test-secret-6l6q": Phase="Running", Reason="", readiness=true. Elapsed: 12.087234247s Apr 3 23:54:22.469: INFO: Pod "pod-subpath-test-secret-6l6q": Phase="Running", Reason="", readiness=true. Elapsed: 14.091973234s Apr 3 23:54:24.474: INFO: Pod "pod-subpath-test-secret-6l6q": Phase="Running", Reason="", readiness=true. Elapsed: 16.096387715s Apr 3 23:54:26.478: INFO: Pod "pod-subpath-test-secret-6l6q": Phase="Running", Reason="", readiness=true. Elapsed: 18.100415883s Apr 3 23:54:28.481: INFO: Pod "pod-subpath-test-secret-6l6q": Phase="Running", Reason="", readiness=true. Elapsed: 20.104135896s Apr 3 23:54:30.486: INFO: Pod "pod-subpath-test-secret-6l6q": Phase="Running", Reason="", readiness=true. Elapsed: 22.108233922s Apr 3 23:54:32.490: INFO: Pod "pod-subpath-test-secret-6l6q": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 24.112429105s STEP: Saw pod success Apr 3 23:54:32.490: INFO: Pod "pod-subpath-test-secret-6l6q" satisfied condition "Succeeded or Failed" Apr 3 23:54:32.493: INFO: Trying to get logs from node latest-worker2 pod pod-subpath-test-secret-6l6q container test-container-subpath-secret-6l6q: STEP: delete the pod Apr 3 23:54:32.535: INFO: Waiting for pod pod-subpath-test-secret-6l6q to disappear Apr 3 23:54:32.587: INFO: Pod pod-subpath-test-secret-6l6q no longer exists STEP: Deleting pod pod-subpath-test-secret-6l6q Apr 3 23:54:32.587: INFO: Deleting pod "pod-subpath-test-secret-6l6q" in namespace "subpath-4744" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 3 23:54:32.604: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-4744" for this suite. • [SLOW TEST:24.344 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with secret pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [LinuxOnly] [Conformance]","total":275,"completed":63,"skipped":1027,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 3 23:54:32.614: 
INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test emptydir 0644 on tmpfs Apr 3 23:54:32.690: INFO: Waiting up to 5m0s for pod "pod-66e11bf5-2125-4d06-9b6d-404cd01cf7c2" in namespace "emptydir-8086" to be "Succeeded or Failed" Apr 3 23:54:32.760: INFO: Pod "pod-66e11bf5-2125-4d06-9b6d-404cd01cf7c2": Phase="Pending", Reason="", readiness=false. Elapsed: 69.711175ms Apr 3 23:54:34.772: INFO: Pod "pod-66e11bf5-2125-4d06-9b6d-404cd01cf7c2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.08173558s Apr 3 23:54:36.796: INFO: Pod "pod-66e11bf5-2125-4d06-9b6d-404cd01cf7c2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.105828291s STEP: Saw pod success Apr 3 23:54:36.796: INFO: Pod "pod-66e11bf5-2125-4d06-9b6d-404cd01cf7c2" satisfied condition "Succeeded or Failed" Apr 3 23:54:36.799: INFO: Trying to get logs from node latest-worker2 pod pod-66e11bf5-2125-4d06-9b6d-404cd01cf7c2 container test-container: STEP: delete the pod Apr 3 23:54:36.844: INFO: Waiting for pod pod-66e11bf5-2125-4d06-9b6d-404cd01cf7c2 to disappear Apr 3 23:54:36.850: INFO: Pod pod-66e11bf5-2125-4d06-9b6d-404cd01cf7c2 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 3 23:54:36.850: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-8086" for this suite. 
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":64,"skipped":1052,"failed":0} SSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 3 23:54:36.859: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Apr 3 23:54:37.590: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Apr 3 23:54:39.602: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721554877, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721554877, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721554877, 
loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721554877, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6cc9cc9dc\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Apr 3 23:54:42.632: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should be able to deny attaching pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Registering the webhook via the AdmissionRegistration API STEP: create a pod STEP: 'kubectl attach' the pod, should be denied by the webhook Apr 3 23:54:46.695: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config attach --namespace=webhook-4148 to-be-attached-pod -i -c=container1' Apr 3 23:54:46.812: INFO: rc: 1 [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 3 23:54:46.818: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-4148" for this suite. STEP: Destroying namespace "webhook-4148-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:10.039 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to deny attaching pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","total":275,"completed":65,"skipped":1055,"failed":0} SSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 3 23:54:46.898: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54 [It] should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating pod liveness-017e4410-3857-49e9-8a51-ff3a15cd1384 in namespace container-probe-6250 Apr 3 23:54:51.010: INFO: Started pod liveness-017e4410-3857-49e9-8a51-ff3a15cd1384 in namespace container-probe-6250 STEP: checking the pod's current state and verifying that restartCount is present Apr 3 
23:54:51.013: INFO: Initial restart count of pod liveness-017e4410-3857-49e9-8a51-ff3a15cd1384 is 0 STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 3 23:58:51.538: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-6250" for this suite. • [SLOW TEST:244.695 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance]","total":275,"completed":66,"skipped":1068,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 3 23:58:51.595: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test downward API volume plugin Apr 3 23:58:51.899: INFO: Waiting up to 5m0s for pod "downwardapi-volume-ff58e10e-6d12-4e91-827b-a7cbcea7bfdc" in namespace "downward-api-2531" to be "Succeeded or Failed" Apr 3 23:58:51.908: INFO: Pod "downwardapi-volume-ff58e10e-6d12-4e91-827b-a7cbcea7bfdc": Phase="Pending", Reason="", readiness=false. Elapsed: 9.00682ms Apr 3 23:58:53.912: INFO: Pod "downwardapi-volume-ff58e10e-6d12-4e91-827b-a7cbcea7bfdc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012799492s Apr 3 23:58:55.916: INFO: Pod "downwardapi-volume-ff58e10e-6d12-4e91-827b-a7cbcea7bfdc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.016850553s STEP: Saw pod success Apr 3 23:58:55.916: INFO: Pod "downwardapi-volume-ff58e10e-6d12-4e91-827b-a7cbcea7bfdc" satisfied condition "Succeeded or Failed" Apr 3 23:58:55.919: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-ff58e10e-6d12-4e91-827b-a7cbcea7bfdc container client-container: STEP: delete the pod Apr 3 23:58:55.992: INFO: Waiting for pod downwardapi-volume-ff58e10e-6d12-4e91-827b-a7cbcea7bfdc to disappear Apr 3 23:58:56.016: INFO: Pod downwardapi-volume-ff58e10e-6d12-4e91-827b-a7cbcea7bfdc no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 3 23:58:56.016: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-2531" for this suite. 
•{"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":275,"completed":67,"skipped":1105,"failed":0} SSSSSSSSSS ------------------------------ [sig-network] Proxy version v1 should proxy logs on node with explicit kubelet port using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 3 23:58:56.063: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy logs on node with explicit kubelet port using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Apr 3 23:58:56.187: INFO: (0) /api/v1/nodes/latest-worker:10250/proxy/logs/:
containers/
pods/
(200; 5.508174ms) Apr 3 23:58:56.191: INFO: (1) /api/v1/nodes/latest-worker:10250/proxy/logs/:
containers/
pods/
(200; 3.977254ms) Apr 3 23:58:56.194: INFO: (2) /api/v1/nodes/latest-worker:10250/proxy/logs/:
containers/
pods/
(200; 3.047083ms) Apr 3 23:58:56.197: INFO: (3) /api/v1/nodes/latest-worker:10250/proxy/logs/:
containers/
pods/
(200; 3.088541ms) Apr 3 23:58:56.201: INFO: (4) /api/v1/nodes/latest-worker:10250/proxy/logs/:
containers/
pods/
(200; 3.53712ms) Apr 3 23:58:56.205: INFO: (5) /api/v1/nodes/latest-worker:10250/proxy/logs/:
containers/
pods/
(200; 3.721367ms) Apr 3 23:58:56.208: INFO: (6) /api/v1/nodes/latest-worker:10250/proxy/logs/:
containers/
pods/
(200; 3.602494ms) Apr 3 23:58:56.212: INFO: (7) /api/v1/nodes/latest-worker:10250/proxy/logs/:
containers/
pods/
(200; 3.618612ms) Apr 3 23:58:56.216: INFO: (8) /api/v1/nodes/latest-worker:10250/proxy/logs/:
containers/
pods/
(200; 3.414462ms) Apr 3 23:58:56.219: INFO: (9) /api/v1/nodes/latest-worker:10250/proxy/logs/:
containers/
pods/
(200; 3.752168ms) Apr 3 23:58:56.223: INFO: (10) /api/v1/nodes/latest-worker:10250/proxy/logs/:
containers/
pods/
(200; 3.15216ms) Apr 3 23:58:56.226: INFO: (11) /api/v1/nodes/latest-worker:10250/proxy/logs/:
containers/
pods/
(200; 3.882315ms) Apr 3 23:58:56.230: INFO: (12) /api/v1/nodes/latest-worker:10250/proxy/logs/:
containers/
pods/
(200; 3.536881ms) Apr 3 23:58:56.234: INFO: (13) /api/v1/nodes/latest-worker:10250/proxy/logs/:
containers/
pods/
(200; 3.775721ms) Apr 3 23:58:56.238: INFO: (14) /api/v1/nodes/latest-worker:10250/proxy/logs/:
containers/
pods/
(200; 3.730583ms) Apr 3 23:58:56.241: INFO: (15) /api/v1/nodes/latest-worker:10250/proxy/logs/:
containers/
pods/
(200; 3.659153ms) Apr 3 23:58:56.245: INFO: (16) /api/v1/nodes/latest-worker:10250/proxy/logs/:
containers/
pods/
(200; 4.037883ms) Apr 3 23:58:56.249: INFO: (17) /api/v1/nodes/latest-worker:10250/proxy/logs/:
containers/
pods/
(200; 3.785959ms) Apr 3 23:58:56.253: INFO: (18) /api/v1/nodes/latest-worker:10250/proxy/logs/:
containers/
pods/
(200; 3.901446ms) Apr 3 23:58:56.257: INFO: (19) /api/v1/nodes/latest-worker:10250/proxy/logs/:
containers/
pods/
(200; 3.843846ms) [AfterEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 3 23:58:56.257: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "proxy-7780" for this suite. •{"msg":"PASSED [sig-network] Proxy version v1 should proxy logs on node with explicit kubelet port using proxy subresource [Conformance]","total":275,"completed":68,"skipped":1115,"failed":0} S ------------------------------ [sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 3 23:58:56.265: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test downward api env vars Apr 3 23:58:56.349: INFO: Waiting up to 5m0s for pod "downward-api-0443656d-3157-497f-9e1d-58c3e739f268" in namespace "downward-api-6562" to be "Succeeded or Failed" Apr 3 23:58:56.364: INFO: Pod "downward-api-0443656d-3157-497f-9e1d-58c3e739f268": Phase="Pending", Reason="", readiness=false. Elapsed: 15.034972ms Apr 3 23:58:58.368: INFO: Pod "downward-api-0443656d-3157-497f-9e1d-58c3e739f268": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018202938s Apr 3 23:59:00.372: INFO: Pod "downward-api-0443656d-3157-497f-9e1d-58c3e739f268": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.022498916s STEP: Saw pod success Apr 3 23:59:00.372: INFO: Pod "downward-api-0443656d-3157-497f-9e1d-58c3e739f268" satisfied condition "Succeeded or Failed" Apr 3 23:59:00.375: INFO: Trying to get logs from node latest-worker2 pod downward-api-0443656d-3157-497f-9e1d-58c3e739f268 container dapi-container: STEP: delete the pod Apr 3 23:59:00.407: INFO: Waiting for pod downward-api-0443656d-3157-497f-9e1d-58c3e739f268 to disappear Apr 3 23:59:00.411: INFO: Pod downward-api-0443656d-3157-497f-9e1d-58c3e739f268 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 3 23:59:00.411: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-6562" for this suite. •{"msg":"PASSED [sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]","total":275,"completed":69,"skipped":1116,"failed":0} SSSS ------------------------------ [sig-api-machinery] Secrets should fail to create secret due to empty secret key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 3 23:59:00.422: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should fail to create secret due to empty secret key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating projection with secret that has name secret-emptykey-test-a4998533-adda-4025-9eea-a5c671ab858f [AfterEach] [sig-api-machinery] Secrets 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 3 23:59:00.487: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-9305" for this suite. •{"msg":"PASSED [sig-api-machinery] Secrets should fail to create secret due to empty secret key [Conformance]","total":275,"completed":70,"skipped":1120,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 3 23:59:00.496: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Apr 3 23:59:00.660: INFO: pod1.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod3", UID:"6a4ca57f-51cb-47c6-a606-41da03cba24a", Controller:(*bool)(0xc003d6953a), BlockOwnerDeletion:(*bool)(0xc003d6953b)}} Apr 3 23:59:00.675: INFO: pod2.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod1", UID:"96667d51-4944-41ed-8d66-2e78353e0b2e", Controller:(*bool)(0xc003d696ea), BlockOwnerDeletion:(*bool)(0xc003d696eb)}} Apr 3 23:59:00.747: INFO: pod3.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod2", UID:"67cfb63d-32e3-4fcc-a831-dfb9f7fa896b", Controller:(*bool)(0xc003b33f3a), 
BlockOwnerDeletion:(*bool)(0xc003b33f3b)}} [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 3 23:59:05.792: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-3732" for this suite. • [SLOW TEST:5.306 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance]","total":275,"completed":71,"skipped":1188,"failed":0} SSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 3 23:59:05.802: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for multiple CRDs of different groups [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: CRs in different groups (two CRDs) show up in OpenAPI documentation Apr 3 23:59:05.874: INFO: >>> kubeConfig: /root/.kube/config Apr 3 23:59:08.784: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 3 23:59:19.276: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-4937" for this suite. • [SLOW TEST:13.482 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for multiple CRDs of different groups [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance]","total":275,"completed":72,"skipped":1192,"failed":0} S ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a configMap. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 3 23:59:19.284: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a configMap. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
STEP: Creating a ConfigMap
STEP: Ensuring resource quota status captures configMap creation
STEP: Deleting a ConfigMap
STEP: Ensuring resource quota status released usage
[AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 3 23:59:49.365: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-9652" for this suite.
• [SLOW TEST:30.090 seconds]
[sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
should create a ResourceQuota and capture the life of a configMap. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a configMap. [Conformance]","total":275,"completed":73,"skipped":1193,"failed":0}
SSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 3 23:59:49.375: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Apr 3 23:59:49.796: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Apr 3 23:59:51.807: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721555189, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721555189, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721555189, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721555189, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6cc9cc9dc\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Apr 3 23:59:54.844: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should be able to deny pod and configmap creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Registering the webhook via the AdmissionRegistration API
STEP: create a pod that should be denied by the webhook
STEP: create a pod that causes the webhook to hang
STEP: create a configmap that should be denied by the webhook
STEP: create a configmap that should be admitted by the webhook
STEP: update (PUT) the admitted configmap to a non-compliant one should be rejected by the webhook
STEP: update (PATCH) the admitted configmap to a non-compliant one should be rejected by the webhook
STEP: create a namespace that bypass the webhook
STEP: create a configmap that violates the webhook policy but is in a whitelisted namespace
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 4 00:00:05.057: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-8993" for this suite.
STEP: Destroying namespace "webhook-8993-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
• [SLOW TEST:15.806 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
should be able to deny pod and configmap creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","total":275,"completed":74,"skipped":1196,"failed":0}
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 4 00:00:05.181: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating secret with name secret-test-12b6a916-43af-4c51-a4ea-0b59eab913d5
STEP: Creating a pod to test consume secrets
Apr 4 00:00:05.270: INFO: Waiting up to 5m0s for pod "pod-secrets-6e746c2b-397d-4fcd-a697-c415af8b0332" in namespace "secrets-2922" to be "Succeeded or Failed"
Apr 4 00:00:05.291: INFO: Pod "pod-secrets-6e746c2b-397d-4fcd-a697-c415af8b0332": Phase="Pending", Reason="", readiness=false. Elapsed: 21.726833ms
Apr 4 00:00:07.296: INFO: Pod "pod-secrets-6e746c2b-397d-4fcd-a697-c415af8b0332": Phase="Pending", Reason="", readiness=false. Elapsed: 2.026320538s
Apr 4 00:00:09.301: INFO: Pod "pod-secrets-6e746c2b-397d-4fcd-a697-c415af8b0332": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.030781448s
STEP: Saw pod success
Apr 4 00:00:09.301: INFO: Pod "pod-secrets-6e746c2b-397d-4fcd-a697-c415af8b0332" satisfied condition "Succeeded or Failed"
Apr 4 00:00:09.304: INFO: Trying to get logs from node latest-worker pod pod-secrets-6e746c2b-397d-4fcd-a697-c415af8b0332 container secret-env-test:
STEP: delete the pod
Apr 4 00:00:09.336: INFO: Waiting for pod pod-secrets-6e746c2b-397d-4fcd-a697-c415af8b0332 to disappear
Apr 4 00:00:09.341: INFO: Pod pod-secrets-6e746c2b-397d-4fcd-a697-c415af8b0332 no longer exists
[AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 4 00:00:09.341: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-2922" for this suite.
•{"msg":"PASSED [sig-api-machinery] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance]","total":275,"completed":75,"skipped":1214,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 4 00:00:09.349: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test emptydir 0644 on tmpfs
Apr 4 00:00:09.402: INFO: Waiting up to 5m0s for pod "pod-e75967fb-2520-48ea-a05d-bcdddf3d8df9" in namespace "emptydir-7880" to be "Succeeded or Failed"
Apr 4 00:00:09.453: INFO: Pod "pod-e75967fb-2520-48ea-a05d-bcdddf3d8df9": Phase="Pending", Reason="", readiness=false. Elapsed: 50.971247ms
Apr 4 00:00:11.457: INFO: Pod "pod-e75967fb-2520-48ea-a05d-bcdddf3d8df9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.055107938s
Apr 4 00:00:13.462: INFO: Pod "pod-e75967fb-2520-48ea-a05d-bcdddf3d8df9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.059413758s
STEP: Saw pod success
Apr 4 00:00:13.462: INFO: Pod "pod-e75967fb-2520-48ea-a05d-bcdddf3d8df9" satisfied condition "Succeeded or Failed"
Apr 4 00:00:13.464: INFO: Trying to get logs from node latest-worker pod pod-e75967fb-2520-48ea-a05d-bcdddf3d8df9 container test-container:
STEP: delete the pod
Apr 4 00:00:13.494: INFO: Waiting for pod pod-e75967fb-2520-48ea-a05d-bcdddf3d8df9 to disappear
Apr 4 00:00:13.519: INFO: Pod pod-e75967fb-2520-48ea-a05d-bcdddf3d8df9 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 4 00:00:13.519: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-7880" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":76,"skipped":1238,"failed":0}
S
------------------------------
[sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 4 00:00:13.527: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test downward api env vars
Apr 4 00:00:13.584: INFO: Waiting up to 5m0s for pod "downward-api-c603cbda-d0da-49d3-a658-58f19d4b2c38" in namespace "downward-api-5705" to be "Succeeded or Failed"
Apr 4 00:00:13.587: INFO: Pod "downward-api-c603cbda-d0da-49d3-a658-58f19d4b2c38": Phase="Pending", Reason="", readiness=false. Elapsed: 2.752768ms
Apr 4 00:00:15.589: INFO: Pod "downward-api-c603cbda-d0da-49d3-a658-58f19d4b2c38": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00560451s
Apr 4 00:00:17.593: INFO: Pod "downward-api-c603cbda-d0da-49d3-a658-58f19d4b2c38": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.009533206s
STEP: Saw pod success
Apr 4 00:00:17.593: INFO: Pod "downward-api-c603cbda-d0da-49d3-a658-58f19d4b2c38" satisfied condition "Succeeded or Failed"
Apr 4 00:00:17.597: INFO: Trying to get logs from node latest-worker2 pod downward-api-c603cbda-d0da-49d3-a658-58f19d4b2c38 container dapi-container:
STEP: delete the pod
Apr 4 00:00:17.619: INFO: Waiting for pod downward-api-c603cbda-d0da-49d3-a658-58f19d4b2c38 to disappear
Apr 4 00:00:17.622: INFO: Pod downward-api-c603cbda-d0da-49d3-a658-58f19d4b2c38 no longer exists
[AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 4 00:00:17.622: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-5705" for this suite.
•{"msg":"PASSED [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]","total":275,"completed":77,"skipped":1239,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 4 00:00:17.628: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42
[It] should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test downward API volume plugin
Apr 4 00:00:17.689: INFO: Waiting up to 5m0s for pod "downwardapi-volume-9e8f102a-70b5-4d25-b370-22fec2e645a2" in namespace "projected-81" to be "Succeeded or Failed"
Apr 4 00:00:17.706: INFO: Pod "downwardapi-volume-9e8f102a-70b5-4d25-b370-22fec2e645a2": Phase="Pending", Reason="", readiness=false. Elapsed: 16.676479ms
Apr 4 00:00:19.710: INFO: Pod "downwardapi-volume-9e8f102a-70b5-4d25-b370-22fec2e645a2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020344664s
Apr 4 00:00:21.714: INFO: Pod "downwardapi-volume-9e8f102a-70b5-4d25-b370-22fec2e645a2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.024307537s
STEP: Saw pod success
Apr 4 00:00:21.714: INFO: Pod "downwardapi-volume-9e8f102a-70b5-4d25-b370-22fec2e645a2" satisfied condition "Succeeded or Failed"
Apr 4 00:00:21.716: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-9e8f102a-70b5-4d25-b370-22fec2e645a2 container client-container:
STEP: delete the pod
Apr 4 00:00:21.744: INFO: Waiting for pod downwardapi-volume-9e8f102a-70b5-4d25-b370-22fec2e645a2 to disappear
Apr 4 00:00:21.754: INFO: Pod downwardapi-volume-9e8f102a-70b5-4d25-b370-22fec2e645a2 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 4 00:00:21.754: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-81" for this suite.
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance]","total":275,"completed":78,"skipped":1268,"failed":0}
SSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 4 00:00:21.764: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating configMap with name configmap-test-volume-41098ab4-333c-4690-b0b0-2554599d6318
STEP: Creating a pod to test consume configMaps
Apr 4 00:00:21.840: INFO: Waiting up to 5m0s for pod "pod-configmaps-0b2c1846-e891-4c90-8dc3-613eae6bb67d" in namespace "configmap-6073" to be "Succeeded or Failed"
Apr 4 00:00:21.844: INFO: Pod "pod-configmaps-0b2c1846-e891-4c90-8dc3-613eae6bb67d": Phase="Pending", Reason="", readiness=false. Elapsed: 3.846415ms
Apr 4 00:00:23.848: INFO: Pod "pod-configmaps-0b2c1846-e891-4c90-8dc3-613eae6bb67d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007988644s
Apr 4 00:00:25.852: INFO: Pod "pod-configmaps-0b2c1846-e891-4c90-8dc3-613eae6bb67d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011752738s
STEP: Saw pod success
Apr 4 00:00:25.852: INFO: Pod "pod-configmaps-0b2c1846-e891-4c90-8dc3-613eae6bb67d" satisfied condition "Succeeded or Failed"
Apr 4 00:00:25.855: INFO: Trying to get logs from node latest-worker2 pod pod-configmaps-0b2c1846-e891-4c90-8dc3-613eae6bb67d container configmap-volume-test:
STEP: delete the pod
Apr 4 00:00:25.902: INFO: Waiting for pod pod-configmaps-0b2c1846-e891-4c90-8dc3-613eae6bb67d to disappear
Apr 4 00:00:25.917: INFO: Pod pod-configmaps-0b2c1846-e891-4c90-8dc3-613eae6bb67d no longer exists
[AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 4 00:00:25.917: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-6073" for this suite.
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]","total":275,"completed":79,"skipped":1280,"failed":0}
SSSSS
------------------------------
[sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 4 00:00:25.924: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test emptydir 0666 on tmpfs
Apr 4 00:00:26.020: INFO: Waiting up to 5m0s for pod "pod-470412b5-823b-4978-be5a-a55fee733b9c" in namespace "emptydir-3110" to be "Succeeded or Failed"
Apr 4 00:00:26.024: INFO: Pod "pod-470412b5-823b-4978-be5a-a55fee733b9c": Phase="Pending", Reason="", readiness=false. Elapsed: 3.730308ms
Apr 4 00:00:28.028: INFO: Pod "pod-470412b5-823b-4978-be5a-a55fee733b9c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007886522s
Apr 4 00:00:30.032: INFO: Pod "pod-470412b5-823b-4978-be5a-a55fee733b9c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011426245s
STEP: Saw pod success
Apr 4 00:00:30.032: INFO: Pod "pod-470412b5-823b-4978-be5a-a55fee733b9c" satisfied condition "Succeeded or Failed"
Apr 4 00:00:30.034: INFO: Trying to get logs from node latest-worker pod pod-470412b5-823b-4978-be5a-a55fee733b9c container test-container:
STEP: delete the pod
Apr 4 00:00:30.107: INFO: Waiting for pod pod-470412b5-823b-4978-be5a-a55fee733b9c to disappear
Apr 4 00:00:30.114: INFO: Pod pod-470412b5-823b-4978-be5a-a55fee733b9c no longer exists
[AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 4 00:00:30.114: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-3110" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":80,"skipped":1285,"failed":0}
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 4 00:00:30.122: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42
[It] should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test downward API volume plugin
Apr 4 00:00:30.308: INFO: Waiting up to 5m0s for pod "downwardapi-volume-0e55eed5-478c-456c-abf4-82bc5be529c4" in namespace "projected-411" to be "Succeeded or Failed"
Apr 4 00:00:30.311: INFO: Pod "downwardapi-volume-0e55eed5-478c-456c-abf4-82bc5be529c4": Phase="Pending", Reason="", readiness=false. Elapsed: 3.327748ms
Apr 4 00:00:32.315: INFO: Pod "downwardapi-volume-0e55eed5-478c-456c-abf4-82bc5be529c4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006620046s
Apr 4 00:00:34.319: INFO: Pod "downwardapi-volume-0e55eed5-478c-456c-abf4-82bc5be529c4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010645602s
STEP: Saw pod success
Apr 4 00:00:34.319: INFO: Pod "downwardapi-volume-0e55eed5-478c-456c-abf4-82bc5be529c4" satisfied condition "Succeeded or Failed"
Apr 4 00:00:34.322: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-0e55eed5-478c-456c-abf4-82bc5be529c4 container client-container:
STEP: delete the pod
Apr 4 00:00:34.387: INFO: Waiting for pod downwardapi-volume-0e55eed5-478c-456c-abf4-82bc5be529c4 to disappear
Apr 4 00:00:34.407: INFO: Pod downwardapi-volume-0e55eed5-478c-456c-abf4-82bc5be529c4 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 4 00:00:34.407: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-411" for this suite.
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance]","total":275,"completed":81,"skipped":1307,"failed":0}
SSSSSS
------------------------------
[sig-api-machinery] Secrets should patch a secret [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 4 00:00:34.415: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should patch a secret [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating a secret
STEP: listing secrets in all namespaces to ensure that there are more than zero
STEP: patching the secret
STEP: deleting the secret using a LabelSelector
STEP: listing secrets in all namespaces, searching for label name and value in patch
[AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 4 00:00:34.527: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-6152" for this suite.
•{"msg":"PASSED [sig-api-machinery] Secrets should patch a secret [Conformance]","total":275,"completed":82,"skipped":1313,"failed":0}
SS
------------------------------
[sig-cli] Kubectl client Kubectl cluster-info should check if Kubernetes master services is included in cluster-info [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 4 00:00:34.535: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219
[It] should check if Kubernetes master services is included in cluster-info [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: validating cluster-info
Apr 4 00:00:34.566: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config cluster-info'
Apr 4 00:00:34.655: INFO: stderr: ""
Apr 4 00:00:34.655: INFO: stdout: "\x1b[0;32mKubernetes master\x1b[0m is running at \x1b[0;33mhttps://172.30.12.66:32771\x1b[0m\n\x1b[0;32mKubeDNS\x1b[0m is running at \x1b[0;33mhttps://172.30.12.66:32771/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy\x1b[0m\n\nTo further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.\n"
[AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 4 00:00:34.655: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-2320" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl cluster-info should check if Kubernetes master services is included in cluster-info [Conformance]","total":275,"completed":83,"skipped":1315,"failed":0}
SSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 4 00:00:34.662: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219
[It] should check if v1 is in available api versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: validating api versions
Apr 4 00:00:34.748: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config api-versions'
Apr 4 00:00:34.947: INFO: stderr: ""
Apr 4 00:00:34.947: INFO: stdout: "admissionregistration.k8s.io/v1\nadmissionregistration.k8s.io/v1beta1\napiextensions.k8s.io/v1\napiextensions.k8s.io/v1beta1\napiregistration.k8s.io/v1\napiregistration.k8s.io/v1beta1\napps/v1\nauthentication.k8s.io/v1\nauthentication.k8s.io/v1beta1\nauthorization.k8s.io/v1\nauthorization.k8s.io/v1beta1\nautoscaling/v1\nautoscaling/v2beta1\nautoscaling/v2beta2\nbatch/v1\nbatch/v1beta1\ncertificates.k8s.io/v1beta1\ncoordination.k8s.io/v1\ncoordination.k8s.io/v1beta1\ndiscovery.k8s.io/v1beta1\nevents.k8s.io/v1beta1\nextensions/v1beta1\nnetworking.k8s.io/v1\nnetworking.k8s.io/v1beta1\nnode.k8s.io/v1beta1\npolicy/v1beta1\nrbac.authorization.k8s.io/v1\nrbac.authorization.k8s.io/v1beta1\nscheduling.k8s.io/v1\nscheduling.k8s.io/v1beta1\nstorage.k8s.io/v1\nstorage.k8s.io/v1beta1\nv1\n"
[AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 4 00:00:34.948: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-7316" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions [Conformance]","total":275,"completed":84,"skipped":1331,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 4 00:00:34.957: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] pod should support shared volumes between containers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating Pod
STEP: Waiting for the pod running
STEP: Geting the pod
STEP: Reading file content from the nginx-container
Apr 4 00:00:39.043: INFO: ExecWithOptions {Command:[/bin/sh -c cat /usr/share/volumeshare/shareddata.txt] Namespace:emptydir-4885 PodName:pod-sharedvolume-051a4f3c-e6f8-42b2-a6a8-4058472f9c1a ContainerName:busybox-main-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Apr 4 00:00:39.043: INFO: >>> kubeConfig: /root/.kube/config
I0404 00:00:39.080517 7 log.go:172] (0xc00277f290) (0xc0010fb0e0) Create stream
I0404 00:00:39.080548 7 log.go:172] (0xc00277f290) (0xc0010fb0e0) Stream added, broadcasting: 1
I0404 00:00:39.082875 7 log.go:172] (0xc00277f290) Reply frame received for 1
I0404 00:00:39.082917 7 log.go:172] (0xc00277f290) (0xc0018b5ae0) Create stream
I0404 00:00:39.082925 7 log.go:172] (0xc00277f290) (0xc0018b5ae0) Stream added, broadcasting: 3
I0404 00:00:39.083831 7 log.go:172] (0xc00277f290) Reply frame received for 3
I0404 00:00:39.083882 7 log.go:172] (0xc00277f290) (0xc0018b5b80) Create stream
I0404 00:00:39.083898 7 log.go:172] (0xc00277f290) (0xc0018b5b80) Stream added, broadcasting: 5
I0404 00:00:39.084760 7 log.go:172] (0xc00277f290) Reply frame received for 5
I0404 00:00:39.163711 7 log.go:172] (0xc00277f290) Data frame received for 3
I0404 00:00:39.163751 7 log.go:172] (0xc0018b5ae0) (3) Data frame handling
I0404 00:00:39.163762 7 log.go:172] (0xc0018b5ae0) (3) Data frame sent
I0404 00:00:39.163828 7 log.go:172] (0xc00277f290) Data frame received for 3
I0404 00:00:39.163871 7 log.go:172] (0xc0018b5ae0) (3) Data frame handling
I0404 00:00:39.163898 7 log.go:172] (0xc00277f290) Data frame received for 5
I0404 00:00:39.163912 7 log.go:172] (0xc0018b5b80) (5) Data frame handling
I0404 00:00:39.165284 7 log.go:172] (0xc00277f290) Data frame received for 1
I0404 00:00:39.165305 7 log.go:172] (0xc0010fb0e0) (1) Data frame handling
I0404 00:00:39.165319 7 log.go:172] (0xc0010fb0e0) (1) Data frame sent
I0404 00:00:39.165333 7 log.go:172] (0xc00277f290) (0xc0010fb0e0) Stream removed, broadcasting: 1
I0404 00:00:39.165425 7 log.go:172] (0xc00277f290) (0xc0010fb0e0) Stream removed, broadcasting: 1
I0404 00:00:39.165445 7 log.go:172] (0xc00277f290) (0xc0018b5ae0) Stream removed, broadcasting: 3
I0404 00:00:39.165544 7 log.go:172] (0xc00277f290) Go away received
I0404 00:00:39.165589 7 log.go:172] (0xc00277f290) (0xc0018b5b80) Stream removed, broadcasting: 5
Apr 4 00:00:39.165: INFO: Exec stderr: ""
[AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 4 00:00:39.165: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-4885" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance]","total":275,"completed":85,"skipped":1374,"failed":0}
SSS
------------------------------
[sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 4 00:00:39.173: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:134
[It] should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a simple DaemonSet "daemon-set"
STEP: Check that daemon pods launch on every node of the cluster.
Apr 4 00:00:39.289: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 4 00:00:39.293: INFO: Number of nodes with available pods: 0
Apr 4 00:00:39.293: INFO: Node latest-worker is running more than one daemon pod
Apr 4 00:00:40.299: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 4 00:00:40.302: INFO: Number of nodes with available pods: 0
Apr 4 00:00:40.303: INFO: Node latest-worker is running more than one daemon pod
Apr 4 00:00:41.419: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 4 00:00:41.422: INFO: Number of nodes with available pods: 0
Apr 4 00:00:41.422: INFO: Node latest-worker is running more than one daemon pod
Apr 4 00:00:42.298: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 4 00:00:42.302: INFO: Number of nodes with available pods: 1
Apr 4 00:00:42.302: INFO: Node latest-worker2 is running more than one daemon pod
Apr 4 00:00:43.298: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 4 00:00:43.301: INFO: Number of nodes with available pods: 2
Apr 4 00:00:43.301: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Set a daemon pod's phase to 'Failed', check that the daemon pod is revived.
Apr 4 00:00:43.313: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 4 00:00:43.318: INFO: Number of nodes with available pods: 1
Apr 4 00:00:43.318: INFO: Node latest-worker2 is running more than one daemon pod
Apr 4 00:00:44.332: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 4 00:00:44.342: INFO: Number of nodes with available pods: 1
Apr 4 00:00:44.343: INFO: Node latest-worker2 is running more than one daemon pod
Apr 4 00:00:45.332: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 4 00:00:45.335: INFO: Number of nodes with available pods: 1
Apr 4 00:00:45.335: INFO: Node latest-worker2 is running more than one daemon pod
Apr 4 00:00:46.324: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 4 00:00:46.327: INFO: Number of nodes with available pods: 1
Apr 4 00:00:46.327: INFO: Node latest-worker2 is running more than one daemon pod
Apr 4 00:00:47.324: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 4 00:00:47.327: INFO: Number of nodes with available pods: 2
Apr 4 00:00:47.327: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Wait for the failed daemon pod to be completely deleted.
[AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:100
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-214, will wait for the garbage collector to delete the pods
Apr 4 00:00:47.391: INFO: Deleting DaemonSet.extensions daemon-set took: 5.974778ms
Apr 4 00:00:47.691: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.262357ms
Apr 4 00:00:53.178: INFO: Number of nodes with available pods: 0
Apr 4 00:00:53.178: INFO: Number of running nodes: 0, number of available pods: 0
Apr 4 00:00:53.185: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-214/daemonsets","resourceVersion":"5194944"},"items":null}
Apr 4 00:00:53.188: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-214/pods","resourceVersion":"5194944"},"items":null}
[AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 4 00:00:53.197: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-214" for this suite.
• [SLOW TEST:14.031 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]","total":275,"completed":86,"skipped":1377,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 4 00:00:53.204: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:74 [It] RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Apr 4 00:00:53.318: INFO: Creating deployment "test-recreate-deployment" Apr 4 00:00:53.322: INFO: Waiting deployment "test-recreate-deployment" to be updated to revision 1 Apr 4 00:00:53.330: INFO: deployment "test-recreate-deployment" doesn't have the required revision set Apr 4 00:00:55.337: INFO: Waiting deployment "test-recreate-deployment" to complete Apr 4 00:00:55.340: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, 
Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721555253, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721555253, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721555253, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721555253, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-846c7dd955\" is progressing."}}, CollisionCount:(*int32)(nil)} Apr 4 00:00:57.343: INFO: Triggering a new rollout for deployment "test-recreate-deployment" Apr 4 00:00:57.349: INFO: Updating deployment test-recreate-deployment Apr 4 00:00:57.349: INFO: Watching deployment "test-recreate-deployment" to verify that new pods will not run with old pods [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:68 Apr 4 00:00:57.760: INFO: Deployment "test-recreate-deployment": &Deployment{ObjectMeta:{test-recreate-deployment deployment-4330 /apis/apps/v1/namespaces/deployment-4330/deployments/test-recreate-deployment 826644a7-cbe3-4e7f-b826-ca21c3e929ef 5195001 2 2020-04-04 00:00:53 +0000 UTC map[name:sample-pod-3] map[deployment.kubernetes.io/revision:2] [] [] []},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent 
SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc003402278 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},Strategy:DeploymentStrategy{Type:Recreate,RollingUpdate:nil,},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:0,UnavailableReplicas:1,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:False,Reason:MinimumReplicasUnavailable,Message:Deployment does not have minimum availability.,LastUpdateTime:2020-04-04 00:00:57 +0000 UTC,LastTransitionTime:2020-04-04 00:00:57 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:ReplicaSetUpdated,Message:ReplicaSet "test-recreate-deployment-5f94c574ff" is progressing.,LastUpdateTime:2020-04-04 00:00:57 +0000 UTC,LastTransitionTime:2020-04-04 00:00:53 +0000 UTC,},},ReadyReplicas:0,CollisionCount:nil,},} Apr 4 00:00:57.800: INFO: New ReplicaSet "test-recreate-deployment-5f94c574ff" of Deployment "test-recreate-deployment": &ReplicaSet{ObjectMeta:{test-recreate-deployment-5f94c574ff deployment-4330 /apis/apps/v1/namespaces/deployment-4330/replicasets/test-recreate-deployment-5f94c574ff 373e5e20-3182-4a64-a508-b4f9037a6933 5194998 1 2020-04-04 00:00:57 +0000 UTC map[name:sample-pod-3 pod-template-hash:5f94c574ff] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:1 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment test-recreate-deployment 826644a7-cbe3-4e7f-b826-ca21c3e929ef 0xc003402687 0xc003402688}] [] 
[]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 5f94c574ff,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3 pod-template-hash:5f94c574ff] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0034026e8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Apr 4 00:00:57.800: INFO: All old ReplicaSets of Deployment "test-recreate-deployment": Apr 4 00:00:57.800: INFO: &ReplicaSet{ObjectMeta:{test-recreate-deployment-846c7dd955 deployment-4330 /apis/apps/v1/namespaces/deployment-4330/replicasets/test-recreate-deployment-846c7dd955 2f1f1df4-221a-407a-a84b-9e93b6eeaaca 5194990 2 2020-04-04 00:00:53 +0000 UTC map[name:sample-pod-3 pod-template-hash:846c7dd955] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:1 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-recreate-deployment 826644a7-cbe3-4e7f-b826-ca21c3e929ef 0xc003402757 0xc003402758}] [] []},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 846c7dd955,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 
0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3 pod-template-hash:846c7dd955] map[] [] [] []} {[] [] [{agnhost us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0034027c8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Apr 4 00:00:57.804: INFO: Pod "test-recreate-deployment-5f94c574ff-hlf6d" is not available: &Pod{ObjectMeta:{test-recreate-deployment-5f94c574ff-hlf6d test-recreate-deployment-5f94c574ff- deployment-4330 /api/v1/namespaces/deployment-4330/pods/test-recreate-deployment-5f94c574ff-hlf6d 82da8d79-afd5-4f09-97f0-dddf4f58b188 5194997 0 2020-04-04 00:00:57 +0000 UTC map[name:sample-pod-3 pod-template-hash:5f94c574ff] map[] [{apps/v1 ReplicaSet test-recreate-deployment-5f94c574ff 373e5e20-3182-4a64-a508-b4f9037a6933 0xc003427c57 0xc003427c58}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-4zgzx,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-4zgzx,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-4zgzx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectRe
ference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-04 00:00:57 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 4 00:00:57.804: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-4330" for this suite. 
•{"msg":"PASSED [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance]","total":275,"completed":87,"skipped":1408,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Security Context When creating a pod with readOnlyRootFilesystem should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 4 00:00:57.899: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41 [It] should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Apr 4 00:00:58.031: INFO: Waiting up to 5m0s for pod "busybox-readonly-false-c0904542-9d45-43ca-a95e-4059ac45ce53" in namespace "security-context-test-4067" to be "Succeeded or Failed" Apr 4 00:00:58.045: INFO: Pod "busybox-readonly-false-c0904542-9d45-43ca-a95e-4059ac45ce53": Phase="Pending", Reason="", readiness=false. Elapsed: 13.632877ms Apr 4 00:01:00.056: INFO: Pod "busybox-readonly-false-c0904542-9d45-43ca-a95e-4059ac45ce53": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024939597s Apr 4 00:01:02.061: INFO: Pod "busybox-readonly-false-c0904542-9d45-43ca-a95e-4059ac45ce53": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.029935002s Apr 4 00:01:02.061: INFO: Pod "busybox-readonly-false-c0904542-9d45-43ca-a95e-4059ac45ce53" satisfied condition "Succeeded or Failed" [AfterEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 4 00:01:02.061: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-4067" for this suite. •{"msg":"PASSED [k8s.io] Security Context When creating a pod with readOnlyRootFilesystem should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]","total":275,"completed":88,"skipped":1424,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 4 00:01:02.071: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:84 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:99 STEP: Creating service test in namespace statefulset-7338 [It] should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a new StatefulSet Apr 4 
00:01:02.188: INFO: Found 0 stateful pods, waiting for 3 Apr 4 00:01:12.193: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Apr 4 00:01:12.193: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Apr 4 00:01:12.193: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false Apr 4 00:01:22.192: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Apr 4 00:01:22.192: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Apr 4 00:01:22.192: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true Apr 4 00:01:22.203: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-7338 ss2-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Apr 4 00:01:22.466: INFO: stderr: "I0404 00:01:22.336511 917 log.go:172] (0xc00095c2c0) (0xc0005d1540) Create stream\nI0404 00:01:22.336597 917 log.go:172] (0xc00095c2c0) (0xc0005d1540) Stream added, broadcasting: 1\nI0404 00:01:22.339157 917 log.go:172] (0xc00095c2c0) Reply frame received for 1\nI0404 00:01:22.339215 917 log.go:172] (0xc00095c2c0) (0xc0003e6000) Create stream\nI0404 00:01:22.339231 917 log.go:172] (0xc00095c2c0) (0xc0003e6000) Stream added, broadcasting: 3\nI0404 00:01:22.340238 917 log.go:172] (0xc00095c2c0) Reply frame received for 3\nI0404 00:01:22.340280 917 log.go:172] (0xc00095c2c0) (0xc00029f540) Create stream\nI0404 00:01:22.340296 917 log.go:172] (0xc00095c2c0) (0xc00029f540) Stream added, broadcasting: 5\nI0404 00:01:22.341466 917 log.go:172] (0xc00095c2c0) Reply frame received for 5\nI0404 00:01:22.427015 917 log.go:172] (0xc00095c2c0) Data frame received for 5\nI0404 00:01:22.427036 917 log.go:172] (0xc00029f540) (5) Data frame handling\nI0404 00:01:22.427047 917 log.go:172] (0xc00029f540) (5) 
Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0404 00:01:22.460126 917 log.go:172] (0xc00095c2c0) Data frame received for 3\nI0404 00:01:22.460172 917 log.go:172] (0xc00095c2c0) Data frame received for 5\nI0404 00:01:22.460200 917 log.go:172] (0xc00029f540) (5) Data frame handling\nI0404 00:01:22.460221 917 log.go:172] (0xc0003e6000) (3) Data frame handling\nI0404 00:01:22.460235 917 log.go:172] (0xc0003e6000) (3) Data frame sent\nI0404 00:01:22.460244 917 log.go:172] (0xc00095c2c0) Data frame received for 3\nI0404 00:01:22.460255 917 log.go:172] (0xc0003e6000) (3) Data frame handling\nI0404 00:01:22.462167 917 log.go:172] (0xc00095c2c0) Data frame received for 1\nI0404 00:01:22.462183 917 log.go:172] (0xc0005d1540) (1) Data frame handling\nI0404 00:01:22.462200 917 log.go:172] (0xc0005d1540) (1) Data frame sent\nI0404 00:01:22.462467 917 log.go:172] (0xc00095c2c0) (0xc0005d1540) Stream removed, broadcasting: 1\nI0404 00:01:22.462531 917 log.go:172] (0xc00095c2c0) Go away received\nI0404 00:01:22.462810 917 log.go:172] (0xc00095c2c0) (0xc0005d1540) Stream removed, broadcasting: 1\nI0404 00:01:22.462824 917 log.go:172] (0xc00095c2c0) (0xc0003e6000) Stream removed, broadcasting: 3\nI0404 00:01:22.462834 917 log.go:172] (0xc00095c2c0) (0xc00029f540) Stream removed, broadcasting: 5\n" Apr 4 00:01:22.466: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Apr 4 00:01:22.466: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss2-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' STEP: Updating StatefulSet template: update image from docker.io/library/httpd:2.4.38-alpine to docker.io/library/httpd:2.4.39-alpine Apr 4 00:01:32.498: INFO: Updating stateful set ss2 STEP: Creating a new revision STEP: Updating Pods in reverse ordinal order Apr 4 00:01:42.560: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec 
--namespace=statefulset-7338 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Apr 4 00:01:45.326: INFO: stderr: "I0404 00:01:45.243604 937 log.go:172] (0xc000d00000) (0xc00082d220) Create stream\nI0404 00:01:45.243678 937 log.go:172] (0xc000d00000) (0xc00082d220) Stream added, broadcasting: 1\nI0404 00:01:45.247813 937 log.go:172] (0xc000d00000) Reply frame received for 1\nI0404 00:01:45.247853 937 log.go:172] (0xc000d00000) (0xc0007900a0) Create stream\nI0404 00:01:45.247866 937 log.go:172] (0xc000d00000) (0xc0007900a0) Stream added, broadcasting: 3\nI0404 00:01:45.248852 937 log.go:172] (0xc000d00000) Reply frame received for 3\nI0404 00:01:45.248893 937 log.go:172] (0xc000d00000) (0xc00082d400) Create stream\nI0404 00:01:45.248904 937 log.go:172] (0xc000d00000) (0xc00082d400) Stream added, broadcasting: 5\nI0404 00:01:45.250375 937 log.go:172] (0xc000d00000) Reply frame received for 5\nI0404 00:01:45.317904 937 log.go:172] (0xc000d00000) Data frame received for 3\nI0404 00:01:45.317948 937 log.go:172] (0xc0007900a0) (3) Data frame handling\nI0404 00:01:45.317970 937 log.go:172] (0xc0007900a0) (3) Data frame sent\nI0404 00:01:45.317982 937 log.go:172] (0xc000d00000) Data frame received for 3\nI0404 00:01:45.318001 937 log.go:172] (0xc0007900a0) (3) Data frame handling\nI0404 00:01:45.318097 937 log.go:172] (0xc000d00000) Data frame received for 5\nI0404 00:01:45.318122 937 log.go:172] (0xc00082d400) (5) Data frame handling\nI0404 00:01:45.318141 937 log.go:172] (0xc00082d400) (5) Data frame sent\nI0404 00:01:45.318155 937 log.go:172] (0xc000d00000) Data frame received for 5\nI0404 00:01:45.318166 937 log.go:172] (0xc00082d400) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0404 00:01:45.319907 937 log.go:172] (0xc000d00000) Data frame received for 1\nI0404 00:01:45.319924 937 log.go:172] (0xc00082d220) (1) Data frame handling\nI0404 00:01:45.319942 937 log.go:172] (0xc00082d220) (1) Data frame 
sent\nI0404 00:01:45.319952 937 log.go:172] (0xc000d00000) (0xc00082d220) Stream removed, broadcasting: 1\nI0404 00:01:45.320057 937 log.go:172] (0xc000d00000) Go away received\nI0404 00:01:45.320262 937 log.go:172] (0xc000d00000) (0xc00082d220) Stream removed, broadcasting: 1\nI0404 00:01:45.320277 937 log.go:172] (0xc000d00000) (0xc0007900a0) Stream removed, broadcasting: 3\nI0404 00:01:45.320290 937 log.go:172] (0xc000d00000) (0xc00082d400) Stream removed, broadcasting: 5\n" Apr 4 00:01:45.326: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Apr 4 00:01:45.326: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss2-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Apr 4 00:02:05.346: INFO: Waiting for StatefulSet statefulset-7338/ss2 to complete update STEP: Rolling back to a previous revision Apr 4 00:02:15.354: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-7338 ss2-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Apr 4 00:02:15.616: INFO: stderr: "I0404 00:02:15.478013 975 log.go:172] (0xc0009fc0b0) (0xc0009d8140) Create stream\nI0404 00:02:15.478066 975 log.go:172] (0xc0009fc0b0) (0xc0009d8140) Stream added, broadcasting: 1\nI0404 00:02:15.482879 975 log.go:172] (0xc0009fc0b0) Reply frame received for 1\nI0404 00:02:15.482938 975 log.go:172] (0xc0009fc0b0) (0xc0009d8000) Create stream\nI0404 00:02:15.482970 975 log.go:172] (0xc0009fc0b0) (0xc0009d8000) Stream added, broadcasting: 3\nI0404 00:02:15.483892 975 log.go:172] (0xc0009fc0b0) Reply frame received for 3\nI0404 00:02:15.483944 975 log.go:172] (0xc0009fc0b0) (0xc000a42000) Create stream\nI0404 00:02:15.483959 975 log.go:172] (0xc0009fc0b0) (0xc000a42000) Stream added, broadcasting: 5\nI0404 00:02:15.484817 975 log.go:172] (0xc0009fc0b0) Reply frame received for 5\nI0404 00:02:15.566175 975 log.go:172] (0xc0009fc0b0) Data 
frame received for 5\nI0404 00:02:15.566203 975 log.go:172] (0xc000a42000) (5) Data frame handling\nI0404 00:02:15.566218 975 log.go:172] (0xc000a42000) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0404 00:02:15.610215 975 log.go:172] (0xc0009fc0b0) Data frame received for 3\nI0404 00:02:15.610236 975 log.go:172] (0xc0009d8000) (3) Data frame handling\nI0404 00:02:15.610246 975 log.go:172] (0xc0009d8000) (3) Data frame sent\nI0404 00:02:15.610252 975 log.go:172] (0xc0009fc0b0) Data frame received for 3\nI0404 00:02:15.610259 975 log.go:172] (0xc0009d8000) (3) Data frame handling\nI0404 00:02:15.610507 975 log.go:172] (0xc0009fc0b0) Data frame received for 5\nI0404 00:02:15.610530 975 log.go:172] (0xc000a42000) (5) Data frame handling\nI0404 00:02:15.612325 975 log.go:172] (0xc0009fc0b0) Data frame received for 1\nI0404 00:02:15.612349 975 log.go:172] (0xc0009d8140) (1) Data frame handling\nI0404 00:02:15.612368 975 log.go:172] (0xc0009d8140) (1) Data frame sent\nI0404 00:02:15.612386 975 log.go:172] (0xc0009fc0b0) (0xc0009d8140) Stream removed, broadcasting: 1\nI0404 00:02:15.612631 975 log.go:172] (0xc0009fc0b0) Go away received\nI0404 00:02:15.612667 975 log.go:172] (0xc0009fc0b0) (0xc0009d8140) Stream removed, broadcasting: 1\nI0404 00:02:15.612760 975 log.go:172] Streams opened: 2, map[spdy.StreamId]*spdystream.Stream{0x3:(*spdystream.Stream)(0xc0009d8000), 0x5:(*spdystream.Stream)(0xc000a42000)}\nI0404 00:02:15.612812 975 log.go:172] (0xc0009fc0b0) (0xc0009d8000) Stream removed, broadcasting: 3\nI0404 00:02:15.612828 975 log.go:172] (0xc0009fc0b0) (0xc000a42000) Stream removed, broadcasting: 5\n" Apr 4 00:02:15.616: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Apr 4 00:02:15.616: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss2-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Apr 4 00:02:25.645: INFO: Updating stateful set ss2 STEP: Rolling back update in 
reverse ordinal order Apr 4 00:02:35.687: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-7338 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Apr 4 00:02:35.890: INFO: stderr: "I0404 00:02:35.821243 995 log.go:172] (0xc0009ce000) (0xc0009c6000) Create stream\nI0404 00:02:35.821403 995 log.go:172] (0xc0009ce000) (0xc0009c6000) Stream added, broadcasting: 1\nI0404 00:02:35.824129 995 log.go:172] (0xc0009ce000) Reply frame received for 1\nI0404 00:02:35.824177 995 log.go:172] (0xc0009ce000) (0xc0007072c0) Create stream\nI0404 00:02:35.824196 995 log.go:172] (0xc0009ce000) (0xc0007072c0) Stream added, broadcasting: 3\nI0404 00:02:35.825320 995 log.go:172] (0xc0009ce000) Reply frame received for 3\nI0404 00:02:35.825425 995 log.go:172] (0xc0009ce000) (0xc0005f6000) Create stream\nI0404 00:02:35.825443 995 log.go:172] (0xc0009ce000) (0xc0005f6000) Stream added, broadcasting: 5\nI0404 00:02:35.826314 995 log.go:172] (0xc0009ce000) Reply frame received for 5\nI0404 00:02:35.884326 995 log.go:172] (0xc0009ce000) Data frame received for 5\nI0404 00:02:35.884381 995 log.go:172] (0xc0005f6000) (5) Data frame handling\nI0404 00:02:35.884400 995 log.go:172] (0xc0005f6000) (5) Data frame sent\nI0404 00:02:35.884415 995 log.go:172] (0xc0009ce000) Data frame received for 5\nI0404 00:02:35.884426 995 log.go:172] (0xc0005f6000) (5) Data frame handling\nI0404 00:02:35.884443 995 log.go:172] (0xc0009ce000) Data frame received for 3\nI0404 00:02:35.884464 995 log.go:172] (0xc0007072c0) (3) Data frame handling\nI0404 00:02:35.884488 995 log.go:172] (0xc0007072c0) (3) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0404 00:02:35.884501 995 log.go:172] (0xc0009ce000) Data frame received for 3\nI0404 00:02:35.884521 995 log.go:172] (0xc0007072c0) (3) Data frame handling\nI0404 00:02:35.885688 995 log.go:172] (0xc0009ce000) Data frame received for 
1\nI0404 00:02:35.885711 995 log.go:172] (0xc0009c6000) (1) Data frame handling\nI0404 00:02:35.885720 995 log.go:172] (0xc0009c6000) (1) Data frame sent\nI0404 00:02:35.885730 995 log.go:172] (0xc0009ce000) (0xc0009c6000) Stream removed, broadcasting: 1\nI0404 00:02:35.885745 995 log.go:172] (0xc0009ce000) Go away received\nI0404 00:02:35.886108 995 log.go:172] (0xc0009ce000) (0xc0009c6000) Stream removed, broadcasting: 1\nI0404 00:02:35.886128 995 log.go:172] (0xc0009ce000) (0xc0007072c0) Stream removed, broadcasting: 3\nI0404 00:02:35.886138 995 log.go:172] (0xc0009ce000) (0xc0005f6000) Stream removed, broadcasting: 5\n" Apr 4 00:02:35.890: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Apr 4 00:02:35.890: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss2-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Apr 4 00:03:05.914: INFO: Waiting for StatefulSet statefulset-7338/ss2 to complete update [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:110 Apr 4 00:03:15.922: INFO: Deleting all statefulset in ns statefulset-7338 Apr 4 00:03:15.924: INFO: Scaling statefulset ss2 to 0 Apr 4 00:03:35.975: INFO: Waiting for statefulset status.replicas updated to 0 Apr 4 00:03:35.978: INFO: Deleting statefulset ss2 [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 4 00:03:35.994: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-7338" for this suite. 
• [SLOW TEST:153.931 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
    should perform rolling updates and roll backs of template modifications [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance]","total":275,"completed":89,"skipped":1464,"failed":0}
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl patch should add annotations for pods in rc [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 4 00:03:36.002: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219
[It] should add annotations for pods in rc [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating Agnhost RC
Apr 4 00:03:36.054: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-2411'
Apr 4 00:03:36.361: INFO: stderr: ""
Apr 4 00:03:36.361: INFO: stdout: "replicationcontroller/agnhost-master created\n"
STEP: Waiting for Agnhost master to
start.
Apr 4 00:03:37.365: INFO: Selector matched 1 pods for map[app:agnhost]
Apr 4 00:03:37.365: INFO: Found 0 / 1
Apr 4 00:03:38.375: INFO: Selector matched 1 pods for map[app:agnhost]
Apr 4 00:03:38.375: INFO: Found 0 / 1
Apr 4 00:03:39.366: INFO: Selector matched 1 pods for map[app:agnhost]
Apr 4 00:03:39.366: INFO: Found 1 / 1
Apr 4 00:03:39.366: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1
STEP: patching all pods
Apr 4 00:03:39.369: INFO: Selector matched 1 pods for map[app:agnhost]
Apr 4 00:03:39.369: INFO: ForEach: Found 1 pods from the filter. Now looping through them.
Apr 4 00:03:39.369: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config patch pod agnhost-master-sgghp --namespace=kubectl-2411 -p {"metadata":{"annotations":{"x":"y"}}}'
Apr 4 00:03:39.468: INFO: stderr: ""
Apr 4 00:03:39.468: INFO: stdout: "pod/agnhost-master-sgghp patched\n"
STEP: checking annotations
Apr 4 00:03:39.481: INFO: Selector matched 1 pods for map[app:agnhost]
Apr 4 00:03:39.481: INFO: ForEach: Found 1 pods from the filter. Now looping through them.
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 4 00:03:39.482: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-2411" for this suite.
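The patch in this test is passed to kubectl as a raw merge-patch body, {"metadata":{"annotations":{"x":"y"}}}. A small sketch of building that body programmatically rather than hand-writing the JSON string (the function name here is illustrative, not part of the test suite):

```python
import json

def annotation_patch(annotations):
    """Build the merge-patch body kubectl sends for
    `kubectl patch pod <name> -p '{"metadata":{"annotations":{...}}}'`."""
    return json.dumps({"metadata": {"annotations": dict(annotations)}})

# The test's payload: annotate the pod with x=y.
body = annotation_patch({"x": "y"})
```

Generating the body with json.dumps avoids quoting mistakes when annotation values contain quotes or backslashes.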
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl patch should add annotations for pods in rc [Conformance]","total":275,"completed":90,"skipped":1486,"failed":0}
SSSSSSSS
------------------------------
[sig-network] Service endpoints latency should not be very high [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-network] Service endpoints latency
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 4 00:03:39.489: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svc-latency
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not be very high [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Apr 4 00:03:39.579: INFO: >>> kubeConfig: /root/.kube/config
STEP: creating replication controller svc-latency-rc in namespace svc-latency-4230
I0404 00:03:39.600115 7 runners.go:190] Created replication controller with name: svc-latency-rc, namespace: svc-latency-4230, replica count: 1
I0404 00:03:40.650591 7 runners.go:190] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0404 00:03:41.650855 7 runners.go:190] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0404 00:03:42.651074 7 runners.go:190] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0404 00:03:43.651275 7 runners.go:190] svc-latency-rc Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
Apr 4 00:03:43.761: INFO: Created: latency-svc-wnv4t
Apr 4 00:03:43.782: INFO: Got endpoints: latency-svc-wnv4t
[31.134554ms] Apr 4 00:03:43.817: INFO: Created: latency-svc-f2jzm Apr 4 00:03:43.827: INFO: Got endpoints: latency-svc-f2jzm [44.82179ms] Apr 4 00:03:43.888: INFO: Created: latency-svc-l2txx Apr 4 00:03:43.930: INFO: Got endpoints: latency-svc-l2txx [147.79462ms] Apr 4 00:03:43.931: INFO: Created: latency-svc-hthpl Apr 4 00:03:43.947: INFO: Got endpoints: latency-svc-hthpl [164.333474ms] Apr 4 00:03:43.972: INFO: Created: latency-svc-h6ttm Apr 4 00:03:43.986: INFO: Got endpoints: latency-svc-h6ttm [203.850485ms] Apr 4 00:03:44.025: INFO: Created: latency-svc-f4zrl Apr 4 00:03:44.062: INFO: Created: latency-svc-v2hfv Apr 4 00:03:44.062: INFO: Got endpoints: latency-svc-f4zrl [280.111423ms] Apr 4 00:03:44.088: INFO: Got endpoints: latency-svc-v2hfv [306.064937ms] Apr 4 00:03:44.110: INFO: Created: latency-svc-9dj65 Apr 4 00:03:44.124: INFO: Got endpoints: latency-svc-9dj65 [341.484971ms] Apr 4 00:03:44.163: INFO: Created: latency-svc-jb7tk Apr 4 00:03:44.182: INFO: Got endpoints: latency-svc-jb7tk [399.973243ms] Apr 4 00:03:44.183: INFO: Created: latency-svc-dg4s4 Apr 4 00:03:44.196: INFO: Got endpoints: latency-svc-dg4s4 [413.851967ms] Apr 4 00:03:44.218: INFO: Created: latency-svc-n45dq Apr 4 00:03:44.232: INFO: Got endpoints: latency-svc-n45dq [449.230556ms] Apr 4 00:03:44.254: INFO: Created: latency-svc-4jqpw Apr 4 00:03:44.331: INFO: Got endpoints: latency-svc-4jqpw [548.437461ms] Apr 4 00:03:44.350: INFO: Created: latency-svc-46qc2 Apr 4 00:03:44.382: INFO: Got endpoints: latency-svc-46qc2 [599.145138ms] Apr 4 00:03:44.416: INFO: Created: latency-svc-94g8b Apr 4 00:03:44.475: INFO: Got endpoints: latency-svc-94g8b [691.980074ms] Apr 4 00:03:44.477: INFO: Created: latency-svc-ghs4j Apr 4 00:03:44.494: INFO: Got endpoints: latency-svc-ghs4j [711.509665ms] Apr 4 00:03:44.494: INFO: Created: latency-svc-mmr7k Apr 4 00:03:44.504: INFO: Got endpoints: latency-svc-mmr7k [721.028113ms] Apr 4 00:03:44.518: INFO: Created: latency-svc-q6wq8 Apr 4 00:03:44.527: INFO: Got 
endpoints: latency-svc-q6wq8 [700.100788ms] Apr 4 00:03:44.548: INFO: Created: latency-svc-l2m7s Apr 4 00:03:44.564: INFO: Got endpoints: latency-svc-l2m7s [633.615748ms] Apr 4 00:03:44.608: INFO: Created: latency-svc-d6qjm Apr 4 00:03:44.632: INFO: Got endpoints: latency-svc-d6qjm [684.851155ms] Apr 4 00:03:44.632: INFO: Created: latency-svc-xhzll Apr 4 00:03:44.656: INFO: Got endpoints: latency-svc-xhzll [669.493721ms] Apr 4 00:03:44.686: INFO: Created: latency-svc-52lxk Apr 4 00:03:44.699: INFO: Got endpoints: latency-svc-52lxk [636.541308ms] Apr 4 00:03:44.732: INFO: Created: latency-svc-hflcv Apr 4 00:03:44.758: INFO: Got endpoints: latency-svc-hflcv [669.517607ms] Apr 4 00:03:44.759: INFO: Created: latency-svc-f5znr Apr 4 00:03:44.777: INFO: Got endpoints: latency-svc-f5znr [653.271123ms] Apr 4 00:03:44.794: INFO: Created: latency-svc-kt44d Apr 4 00:03:44.807: INFO: Got endpoints: latency-svc-kt44d [625.032363ms] Apr 4 00:03:44.824: INFO: Created: latency-svc-x7gz6 Apr 4 00:03:44.876: INFO: Got endpoints: latency-svc-x7gz6 [679.68003ms] Apr 4 00:03:44.887: INFO: Created: latency-svc-jw2t2 Apr 4 00:03:44.897: INFO: Got endpoints: latency-svc-jw2t2 [665.161243ms] Apr 4 00:03:44.920: INFO: Created: latency-svc-6bgl6 Apr 4 00:03:44.929: INFO: Got endpoints: latency-svc-6bgl6 [598.081539ms] Apr 4 00:03:44.949: INFO: Created: latency-svc-wzcw8 Apr 4 00:03:44.965: INFO: Got endpoints: latency-svc-wzcw8 [583.415105ms] Apr 4 00:03:45.046: INFO: Created: latency-svc-sggph Apr 4 00:03:45.061: INFO: Got endpoints: latency-svc-sggph [586.184344ms] Apr 4 00:03:45.088: INFO: Created: latency-svc-kknqh Apr 4 00:03:45.097: INFO: Got endpoints: latency-svc-kknqh [603.011916ms] Apr 4 00:03:45.165: INFO: Created: latency-svc-2hxf8 Apr 4 00:03:45.190: INFO: Created: latency-svc-gd4xv Apr 4 00:03:45.190: INFO: Got endpoints: latency-svc-2hxf8 [686.429113ms] Apr 4 00:03:45.205: INFO: Got endpoints: latency-svc-gd4xv [677.730803ms] Apr 4 00:03:45.226: INFO: Created: 
latency-svc-kbmtq Apr 4 00:03:45.241: INFO: Got endpoints: latency-svc-kbmtq [677.393121ms] Apr 4 00:03:45.262: INFO: Created: latency-svc-jq67g Apr 4 00:03:45.289: INFO: Got endpoints: latency-svc-jq67g [657.626806ms] Apr 4 00:03:45.304: INFO: Created: latency-svc-xzw4r Apr 4 00:03:45.316: INFO: Got endpoints: latency-svc-xzw4r [660.426548ms] Apr 4 00:03:45.334: INFO: Created: latency-svc-4pkng Apr 4 00:03:45.359: INFO: Got endpoints: latency-svc-4pkng [659.632239ms] Apr 4 00:03:45.425: INFO: Created: latency-svc-zrk8p Apr 4 00:03:45.436: INFO: Created: latency-svc-x4xr5 Apr 4 00:03:45.436: INFO: Got endpoints: latency-svc-zrk8p [677.877311ms] Apr 4 00:03:45.448: INFO: Got endpoints: latency-svc-x4xr5 [670.700779ms] Apr 4 00:03:45.672: INFO: Created: latency-svc-vsr4p Apr 4 00:03:45.682: INFO: Got endpoints: latency-svc-vsr4p [874.528189ms] Apr 4 00:03:45.712: INFO: Created: latency-svc-n6xn2 Apr 4 00:03:45.724: INFO: Got endpoints: latency-svc-n6xn2 [847.900862ms] Apr 4 00:03:45.754: INFO: Created: latency-svc-pjxnf Apr 4 00:03:45.768: INFO: Got endpoints: latency-svc-pjxnf [870.773756ms] Apr 4 00:03:45.804: INFO: Created: latency-svc-pqsqj Apr 4 00:03:45.810: INFO: Got endpoints: latency-svc-pqsqj [881.045631ms] Apr 4 00:03:45.838: INFO: Created: latency-svc-4f5lw Apr 4 00:03:45.852: INFO: Got endpoints: latency-svc-4f5lw [886.634066ms] Apr 4 00:03:45.875: INFO: Created: latency-svc-j2ghr Apr 4 00:03:45.887: INFO: Got endpoints: latency-svc-j2ghr [826.533019ms] Apr 4 00:03:45.930: INFO: Created: latency-svc-ntl9s Apr 4 00:03:45.936: INFO: Got endpoints: latency-svc-ntl9s [838.770888ms] Apr 4 00:03:45.976: INFO: Created: latency-svc-d2tpw Apr 4 00:03:46.017: INFO: Got endpoints: latency-svc-d2tpw [826.997068ms] Apr 4 00:03:46.224: INFO: Created: latency-svc-5nspm Apr 4 00:03:46.276: INFO: Got endpoints: latency-svc-5nspm [1.07092458s] Apr 4 00:03:46.277: INFO: Created: latency-svc-mbtmp Apr 4 00:03:46.293: INFO: Got endpoints: latency-svc-mbtmp [1.051615344s] Apr 
4 00:03:46.384: INFO: Created: latency-svc-7qnw6 Apr 4 00:03:46.401: INFO: Got endpoints: latency-svc-7qnw6 [1.111576124s] Apr 4 00:03:46.426: INFO: Created: latency-svc-z7rdx Apr 4 00:03:46.437: INFO: Got endpoints: latency-svc-z7rdx [1.120262706s] Apr 4 00:03:46.516: INFO: Created: latency-svc-6qh9q Apr 4 00:03:46.685: INFO: Got endpoints: latency-svc-6qh9q [1.326071513s] Apr 4 00:03:46.690: INFO: Created: latency-svc-gx4wg Apr 4 00:03:46.708: INFO: Got endpoints: latency-svc-gx4wg [1.272059609s] Apr 4 00:03:46.732: INFO: Created: latency-svc-gkmhg Apr 4 00:03:46.749: INFO: Got endpoints: latency-svc-gkmhg [1.300466116s] Apr 4 00:03:46.769: INFO: Created: latency-svc-k8w8n Apr 4 00:03:46.780: INFO: Got endpoints: latency-svc-k8w8n [1.097902776s] Apr 4 00:03:46.828: INFO: Created: latency-svc-frnjh Apr 4 00:03:46.835: INFO: Got endpoints: latency-svc-frnjh [1.11128514s] Apr 4 00:03:46.876: INFO: Created: latency-svc-pqpbj Apr 4 00:03:46.978: INFO: Got endpoints: latency-svc-pqpbj [1.210222424s] Apr 4 00:03:46.997: INFO: Created: latency-svc-p7vlf Apr 4 00:03:47.026: INFO: Got endpoints: latency-svc-p7vlf [1.215569657s] Apr 4 00:03:47.109: INFO: Created: latency-svc-v6dk4 Apr 4 00:03:47.128: INFO: Got endpoints: latency-svc-v6dk4 [1.2765335s] Apr 4 00:03:47.130: INFO: Created: latency-svc-kdmj4 Apr 4 00:03:47.140: INFO: Got endpoints: latency-svc-kdmj4 [1.25205452s] Apr 4 00:03:47.153: INFO: Created: latency-svc-gccjt Apr 4 00:03:47.183: INFO: Got endpoints: latency-svc-gccjt [1.246341526s] Apr 4 00:03:47.241: INFO: Created: latency-svc-hk86m Apr 4 00:03:47.260: INFO: Got endpoints: latency-svc-hk86m [1.242861695s] Apr 4 00:03:47.261: INFO: Created: latency-svc-bf256 Apr 4 00:03:47.278: INFO: Got endpoints: latency-svc-bf256 [1.002347485s] Apr 4 00:03:47.309: INFO: Created: latency-svc-q2wzk Apr 4 00:03:47.329: INFO: Got endpoints: latency-svc-q2wzk [1.036294346s] Apr 4 00:03:47.369: INFO: Created: latency-svc-wc2dx Apr 4 00:03:47.383: INFO: Got endpoints: 
latency-svc-wc2dx [982.533819ms] Apr 4 00:03:47.404: INFO: Created: latency-svc-s5vhj Apr 4 00:03:47.434: INFO: Got endpoints: latency-svc-s5vhj [997.500455ms] Apr 4 00:03:47.511: INFO: Created: latency-svc-dm8xs Apr 4 00:03:47.543: INFO: Created: latency-svc-kd6z4 Apr 4 00:03:47.543: INFO: Got endpoints: latency-svc-dm8xs [857.737384ms] Apr 4 00:03:47.553: INFO: Got endpoints: latency-svc-kd6z4 [845.227176ms] Apr 4 00:03:47.573: INFO: Created: latency-svc-8btc2 Apr 4 00:03:47.591: INFO: Got endpoints: latency-svc-8btc2 [841.90266ms] Apr 4 00:03:47.643: INFO: Created: latency-svc-8km8j Apr 4 00:03:47.662: INFO: Got endpoints: latency-svc-8km8j [881.808872ms] Apr 4 00:03:47.662: INFO: Created: latency-svc-vq9td Apr 4 00:03:47.678: INFO: Got endpoints: latency-svc-vq9td [843.180251ms] Apr 4 00:03:47.705: INFO: Created: latency-svc-qq2nn Apr 4 00:03:47.729: INFO: Got endpoints: latency-svc-qq2nn [751.050003ms] Apr 4 00:03:47.781: INFO: Created: latency-svc-gps67 Apr 4 00:03:47.800: INFO: Created: latency-svc-hjgkq Apr 4 00:03:47.800: INFO: Got endpoints: latency-svc-gps67 [774.201433ms] Apr 4 00:03:47.817: INFO: Got endpoints: latency-svc-hjgkq [688.179285ms] Apr 4 00:03:47.842: INFO: Created: latency-svc-qfm5d Apr 4 00:03:47.866: INFO: Got endpoints: latency-svc-qfm5d [726.494668ms] Apr 4 00:03:47.930: INFO: Created: latency-svc-7hjgr Apr 4 00:03:47.944: INFO: Created: latency-svc-pq6nc Apr 4 00:03:47.945: INFO: Got endpoints: latency-svc-7hjgr [761.957712ms] Apr 4 00:03:47.958: INFO: Got endpoints: latency-svc-pq6nc [697.687876ms] Apr 4 00:03:48.004: INFO: Created: latency-svc-wfxx4 Apr 4 00:03:48.024: INFO: Got endpoints: latency-svc-wfxx4 [745.733737ms] Apr 4 00:03:48.086: INFO: Created: latency-svc-s4x4n Apr 4 00:03:48.106: INFO: Created: latency-svc-pjfxf Apr 4 00:03:48.107: INFO: Got endpoints: latency-svc-s4x4n [777.341332ms] Apr 4 00:03:48.131: INFO: Got endpoints: latency-svc-pjfxf [747.169588ms] Apr 4 00:03:48.155: INFO: Created: latency-svc-v2z7m Apr 4 
00:03:48.174: INFO: Got endpoints: latency-svc-v2z7m [739.866425ms] Apr 4 00:03:48.208: INFO: Created: latency-svc-7lmm5 Apr 4 00:03:48.224: INFO: Got endpoints: latency-svc-7lmm5 [681.294231ms] Apr 4 00:03:48.250: INFO: Created: latency-svc-2sg5g Apr 4 00:03:48.272: INFO: Got endpoints: latency-svc-2sg5g [718.936954ms] Apr 4 00:03:48.298: INFO: Created: latency-svc-bg64p Apr 4 00:03:48.331: INFO: Got endpoints: latency-svc-bg64p [740.422251ms] Apr 4 00:03:48.340: INFO: Created: latency-svc-lrv8d Apr 4 00:03:48.356: INFO: Got endpoints: latency-svc-lrv8d [694.099584ms] Apr 4 00:03:48.370: INFO: Created: latency-svc-tkwb5 Apr 4 00:03:48.380: INFO: Got endpoints: latency-svc-tkwb5 [701.420676ms] Apr 4 00:03:48.407: INFO: Created: latency-svc-67zld Apr 4 00:03:48.422: INFO: Got endpoints: latency-svc-67zld [692.824613ms] Apr 4 00:03:48.467: INFO: Created: latency-svc-qjqbt Apr 4 00:03:48.482: INFO: Got endpoints: latency-svc-qjqbt [681.908927ms] Apr 4 00:03:48.503: INFO: Created: latency-svc-jjdgc Apr 4 00:03:48.527: INFO: Got endpoints: latency-svc-jjdgc [710.041299ms] Apr 4 00:03:48.552: INFO: Created: latency-svc-lgmxc Apr 4 00:03:48.572: INFO: Got endpoints: latency-svc-lgmxc [705.446112ms] Apr 4 00:03:48.581: INFO: Created: latency-svc-rh9xg Apr 4 00:03:48.593: INFO: Got endpoints: latency-svc-rh9xg [648.660993ms] Apr 4 00:03:48.616: INFO: Created: latency-svc-lk7wl Apr 4 00:03:48.629: INFO: Got endpoints: latency-svc-lk7wl [671.086401ms] Apr 4 00:03:48.646: INFO: Created: latency-svc-5x7q9 Apr 4 00:03:48.659: INFO: Got endpoints: latency-svc-5x7q9 [635.161835ms] Apr 4 00:03:48.697: INFO: Created: latency-svc-w58h8 Apr 4 00:03:48.719: INFO: Got endpoints: latency-svc-w58h8 [612.256116ms] Apr 4 00:03:48.719: INFO: Created: latency-svc-ctpf8 Apr 4 00:03:48.761: INFO: Got endpoints: latency-svc-ctpf8 [630.107199ms] Apr 4 00:03:48.866: INFO: Created: latency-svc-jpct7 Apr 4 00:03:48.893: INFO: Got endpoints: latency-svc-jpct7 [719.124165ms] Apr 4 00:03:48.893: INFO: 
Created: latency-svc-q664p Apr 4 00:03:48.913: INFO: Got endpoints: latency-svc-q664p [689.326662ms] Apr 4 00:03:48.928: INFO: Created: latency-svc-l97q9 Apr 4 00:03:48.937: INFO: Got endpoints: latency-svc-l97q9 [664.956968ms] Apr 4 00:03:48.953: INFO: Created: latency-svc-8zhlx Apr 4 00:03:48.961: INFO: Got endpoints: latency-svc-8zhlx [629.916367ms] Apr 4 00:03:48.995: INFO: Created: latency-svc-cfvkw Apr 4 00:03:49.003: INFO: Got endpoints: latency-svc-cfvkw [646.820203ms] Apr 4 00:03:49.024: INFO: Created: latency-svc-hmsm5 Apr 4 00:03:49.039: INFO: Got endpoints: latency-svc-hmsm5 [658.961443ms] Apr 4 00:03:49.060: INFO: Created: latency-svc-cnmbk Apr 4 00:03:49.073: INFO: Got endpoints: latency-svc-cnmbk [650.692996ms] Apr 4 00:03:49.139: INFO: Created: latency-svc-mx6kz Apr 4 00:03:49.156: INFO: Got endpoints: latency-svc-mx6kz [674.193585ms] Apr 4 00:03:49.157: INFO: Created: latency-svc-4d8h9 Apr 4 00:03:49.180: INFO: Got endpoints: latency-svc-4d8h9 [653.223746ms] Apr 4 00:03:49.205: INFO: Created: latency-svc-rxqwf Apr 4 00:03:49.217: INFO: Got endpoints: latency-svc-rxqwf [644.929341ms] Apr 4 00:03:49.234: INFO: Created: latency-svc-nqkfp Apr 4 00:03:49.265: INFO: Got endpoints: latency-svc-nqkfp [671.535736ms] Apr 4 00:03:49.266: INFO: Created: latency-svc-nfmr7 Apr 4 00:03:49.276: INFO: Got endpoints: latency-svc-nfmr7 [646.951052ms] Apr 4 00:03:49.300: INFO: Created: latency-svc-bnzcs Apr 4 00:03:49.313: INFO: Got endpoints: latency-svc-bnzcs [653.200875ms] Apr 4 00:03:49.355: INFO: Created: latency-svc-q4wg5 Apr 4 00:03:49.396: INFO: Got endpoints: latency-svc-q4wg5 [677.120128ms] Apr 4 00:03:49.426: INFO: Created: latency-svc-mgvb6 Apr 4 00:03:49.440: INFO: Got endpoints: latency-svc-mgvb6 [679.453582ms] Apr 4 00:03:49.475: INFO: Created: latency-svc-npq46 Apr 4 00:03:49.534: INFO: Got endpoints: latency-svc-npq46 [641.090898ms] Apr 4 00:03:49.537: INFO: Created: latency-svc-qwnrp Apr 4 00:03:49.548: INFO: Got endpoints: latency-svc-qwnrp 
[634.498512ms] Apr 4 00:03:49.564: INFO: Created: latency-svc-zf7cv Apr 4 00:03:49.578: INFO: Got endpoints: latency-svc-zf7cv [641.105577ms] Apr 4 00:03:49.601: INFO: Created: latency-svc-l8zh4 Apr 4 00:03:49.614: INFO: Got endpoints: latency-svc-l8zh4 [653.447057ms] Apr 4 00:03:49.666: INFO: Created: latency-svc-c7b9w Apr 4 00:03:49.678: INFO: Got endpoints: latency-svc-c7b9w [675.11616ms] Apr 4 00:03:49.703: INFO: Created: latency-svc-cw7qs Apr 4 00:03:49.720: INFO: Got endpoints: latency-svc-cw7qs [681.27599ms] Apr 4 00:03:49.738: INFO: Created: latency-svc-kjjbf Apr 4 00:03:49.756: INFO: Got endpoints: latency-svc-kjjbf [683.001ms] Apr 4 00:03:49.792: INFO: Created: latency-svc-dktwd Apr 4 00:03:49.798: INFO: Got endpoints: latency-svc-dktwd [641.656266ms] Apr 4 00:03:49.828: INFO: Created: latency-svc-k4j4q Apr 4 00:03:49.840: INFO: Got endpoints: latency-svc-k4j4q [659.711771ms] Apr 4 00:03:49.858: INFO: Created: latency-svc-t6r6b Apr 4 00:03:49.930: INFO: Got endpoints: latency-svc-t6r6b [712.963598ms] Apr 4 00:03:49.931: INFO: Created: latency-svc-gmnrm Apr 4 00:03:49.973: INFO: Got endpoints: latency-svc-gmnrm [707.840906ms] Apr 4 00:03:50.021: INFO: Created: latency-svc-w9dlb Apr 4 00:03:50.050: INFO: Got endpoints: latency-svc-w9dlb [773.298654ms] Apr 4 00:03:50.068: INFO: Created: latency-svc-chzc5 Apr 4 00:03:50.094: INFO: Got endpoints: latency-svc-chzc5 [780.721102ms] Apr 4 00:03:50.122: INFO: Created: latency-svc-bwjfh Apr 4 00:03:50.147: INFO: Got endpoints: latency-svc-bwjfh [750.705559ms] Apr 4 00:03:50.187: INFO: Created: latency-svc-q6znm Apr 4 00:03:50.195: INFO: Got endpoints: latency-svc-q6znm [754.701195ms] Apr 4 00:03:50.212: INFO: Created: latency-svc-p4zc5 Apr 4 00:03:50.231: INFO: Got endpoints: latency-svc-p4zc5 [696.811386ms] Apr 4 00:03:50.248: INFO: Created: latency-svc-hf4d7 Apr 4 00:03:50.261: INFO: Got endpoints: latency-svc-hf4d7 [713.349143ms] Apr 4 00:03:50.278: INFO: Created: latency-svc-rpm6q Apr 4 00:03:50.307: INFO: Got 
endpoints: latency-svc-rpm6q [728.321849ms] Apr 4 00:03:50.338: INFO: Created: latency-svc-jh7k6 Apr 4 00:03:50.352: INFO: Got endpoints: latency-svc-jh7k6 [737.056367ms] Apr 4 00:03:50.368: INFO: Created: latency-svc-dsmff Apr 4 00:03:50.385: INFO: Got endpoints: latency-svc-dsmff [706.995789ms] Apr 4 00:03:50.404: INFO: Created: latency-svc-ntzxj Apr 4 00:03:50.433: INFO: Got endpoints: latency-svc-ntzxj [712.448689ms] Apr 4 00:03:50.440: INFO: Created: latency-svc-jzt7z Apr 4 00:03:50.469: INFO: Got endpoints: latency-svc-jzt7z [713.052005ms] Apr 4 00:03:50.488: INFO: Created: latency-svc-mlclf Apr 4 00:03:50.505: INFO: Got endpoints: latency-svc-mlclf [706.821779ms] Apr 4 00:03:50.518: INFO: Created: latency-svc-bcj6k Apr 4 00:03:50.529: INFO: Got endpoints: latency-svc-bcj6k [689.08503ms] Apr 4 00:03:50.577: INFO: Created: latency-svc-fjv87 Apr 4 00:03:50.596: INFO: Got endpoints: latency-svc-fjv87 [666.751087ms] Apr 4 00:03:50.596: INFO: Created: latency-svc-6xmjr Apr 4 00:03:50.607: INFO: Got endpoints: latency-svc-6xmjr [634.022178ms] Apr 4 00:03:50.627: INFO: Created: latency-svc-7nj7q Apr 4 00:03:50.638: INFO: Got endpoints: latency-svc-7nj7q [588.822496ms] Apr 4 00:03:50.656: INFO: Created: latency-svc-bcdgr Apr 4 00:03:50.668: INFO: Got endpoints: latency-svc-bcdgr [574.836601ms] Apr 4 00:03:50.702: INFO: Created: latency-svc-s4tnh Apr 4 00:03:50.722: INFO: Created: latency-svc-s9lgg Apr 4 00:03:50.722: INFO: Got endpoints: latency-svc-s4tnh [575.256341ms] Apr 4 00:03:50.734: INFO: Got endpoints: latency-svc-s9lgg [539.185839ms] Apr 4 00:03:50.759: INFO: Created: latency-svc-l9svw Apr 4 00:03:50.770: INFO: Got endpoints: latency-svc-l9svw [538.905514ms] Apr 4 00:03:50.794: INFO: Created: latency-svc-q6ctq Apr 4 00:03:50.834: INFO: Got endpoints: latency-svc-q6ctq [572.393283ms] Apr 4 00:03:50.855: INFO: Created: latency-svc-kgmrr Apr 4 00:03:50.874: INFO: Got endpoints: latency-svc-kgmrr [566.863703ms] Apr 4 00:03:50.903: INFO: Created: 
latency-svc-n59k5 Apr 4 00:03:50.932: INFO: Got endpoints: latency-svc-n59k5 [580.793575ms] Apr 4 00:03:50.984: INFO: Created: latency-svc-lrjtc Apr 4 00:03:51.016: INFO: Created: latency-svc-6lds8 Apr 4 00:03:51.016: INFO: Got endpoints: latency-svc-lrjtc [630.837322ms] Apr 4 00:03:51.026: INFO: Got endpoints: latency-svc-6lds8 [593.388455ms] Apr 4 00:03:51.040: INFO: Created: latency-svc-hnfjj Apr 4 00:03:51.058: INFO: Got endpoints: latency-svc-hnfjj [588.942331ms] Apr 4 00:03:51.083: INFO: Created: latency-svc-l4pnr Apr 4 00:03:51.121: INFO: Got endpoints: latency-svc-l4pnr [615.938371ms] Apr 4 00:03:51.130: INFO: Created: latency-svc-jjlhp Apr 4 00:03:51.147: INFO: Got endpoints: latency-svc-jjlhp [617.591ms] Apr 4 00:03:51.166: INFO: Created: latency-svc-krwmf Apr 4 00:03:51.178: INFO: Got endpoints: latency-svc-krwmf [581.594223ms] Apr 4 00:03:51.196: INFO: Created: latency-svc-gc574 Apr 4 00:03:51.208: INFO: Got endpoints: latency-svc-gc574 [601.02236ms] Apr 4 00:03:51.259: INFO: Created: latency-svc-7czg2 Apr 4 00:03:51.280: INFO: Created: latency-svc-cndbc Apr 4 00:03:51.280: INFO: Got endpoints: latency-svc-7czg2 [641.600824ms] Apr 4 00:03:51.292: INFO: Got endpoints: latency-svc-cndbc [623.403619ms] Apr 4 00:03:51.310: INFO: Created: latency-svc-45hwl Apr 4 00:03:51.322: INFO: Got endpoints: latency-svc-45hwl [599.63315ms] Apr 4 00:03:51.340: INFO: Created: latency-svc-2jlhk Apr 4 00:03:51.352: INFO: Got endpoints: latency-svc-2jlhk [617.543575ms] Apr 4 00:03:51.410: INFO: Created: latency-svc-tdzbm Apr 4 00:03:51.431: INFO: Got endpoints: latency-svc-tdzbm [660.225028ms] Apr 4 00:03:51.431: INFO: Created: latency-svc-dpw26 Apr 4 00:03:51.484: INFO: Got endpoints: latency-svc-dpw26 [649.96697ms] Apr 4 00:03:51.565: INFO: Created: latency-svc-fzx5w Apr 4 00:03:51.586: INFO: Created: latency-svc-88d7t Apr 4 00:03:51.586: INFO: Got endpoints: latency-svc-fzx5w [712.455285ms] Apr 4 00:03:51.610: INFO: Got endpoints: latency-svc-88d7t [677.562331ms] Apr 4 
00:03:51.640: INFO: Created: latency-svc-t4xjl Apr 4 00:03:51.655: INFO: Got endpoints: latency-svc-t4xjl [639.475108ms] Apr 4 00:03:51.750: INFO: Created: latency-svc-nmcfh Apr 4 00:03:51.778: INFO: Got endpoints: latency-svc-nmcfh [751.690418ms] Apr 4 00:03:51.779: INFO: Created: latency-svc-zjtdb Apr 4 00:03:51.797: INFO: Got endpoints: latency-svc-zjtdb [738.726396ms] Apr 4 00:03:51.826: INFO: Created: latency-svc-2kntg Apr 4 00:03:51.858: INFO: Got endpoints: latency-svc-2kntg [736.688464ms] Apr 4 00:03:51.874: INFO: Created: latency-svc-q9wgr Apr 4 00:03:51.885: INFO: Got endpoints: latency-svc-q9wgr [738.249799ms] Apr 4 00:03:51.898: INFO: Created: latency-svc-bq2hx Apr 4 00:03:51.921: INFO: Got endpoints: latency-svc-bq2hx [743.357346ms] Apr 4 00:03:51.939: INFO: Created: latency-svc-6fh7g Apr 4 00:03:51.957: INFO: Got endpoints: latency-svc-6fh7g [749.156722ms] Apr 4 00:03:52.019: INFO: Created: latency-svc-tgzr6 Apr 4 00:03:52.042: INFO: Created: latency-svc-tkm2w Apr 4 00:03:52.042: INFO: Got endpoints: latency-svc-tgzr6 [761.654539ms] Apr 4 00:03:52.072: INFO: Got endpoints: latency-svc-tkm2w [779.704789ms] Apr 4 00:03:52.096: INFO: Created: latency-svc-2pshh Apr 4 00:03:52.107: INFO: Got endpoints: latency-svc-2pshh [784.960498ms] Apr 4 00:03:52.144: INFO: Created: latency-svc-4skw6 Apr 4 00:03:52.161: INFO: Got endpoints: latency-svc-4skw6 [808.984777ms] Apr 4 00:03:52.186: INFO: Created: latency-svc-5425z Apr 4 00:03:52.210: INFO: Got endpoints: latency-svc-5425z [779.122975ms] Apr 4 00:03:52.227: INFO: Created: latency-svc-pzfpt Apr 4 00:03:52.253: INFO: Got endpoints: latency-svc-pzfpt [768.989207ms] Apr 4 00:03:52.264: INFO: Created: latency-svc-f28mq Apr 4 00:03:52.272: INFO: Got endpoints: latency-svc-f28mq [685.980034ms] Apr 4 00:03:52.294: INFO: Created: latency-svc-wkbvc Apr 4 00:03:52.324: INFO: Got endpoints: latency-svc-wkbvc [713.777686ms] Apr 4 00:03:52.379: INFO: Created: latency-svc-jkscc Apr 4 00:03:52.402: INFO: Got endpoints: 
latency-svc-jkscc [746.771425ms] Apr 4 00:03:52.402: INFO: Created: latency-svc-58fj5 Apr 4 00:03:52.422: INFO: Got endpoints: latency-svc-58fj5 [644.297133ms] Apr 4 00:03:52.443: INFO: Created: latency-svc-j5xdt Apr 4 00:03:52.474: INFO: Got endpoints: latency-svc-j5xdt [676.743192ms] Apr 4 00:03:52.528: INFO: Created: latency-svc-pt5vl Apr 4 00:03:52.532: INFO: Got endpoints: latency-svc-pt5vl [674.078137ms] Apr 4 00:03:52.552: INFO: Created: latency-svc-wsqwh Apr 4 00:03:52.568: INFO: Got endpoints: latency-svc-wsqwh [683.07097ms] Apr 4 00:03:52.582: INFO: Created: latency-svc-6x5sr Apr 4 00:03:52.592: INFO: Got endpoints: latency-svc-6x5sr [670.684561ms] Apr 4 00:03:52.606: INFO: Created: latency-svc-dwpfr Apr 4 00:03:52.616: INFO: Got endpoints: latency-svc-dwpfr [659.361621ms] Apr 4 00:03:52.654: INFO: Created: latency-svc-h7g5z Apr 4 00:03:52.678: INFO: Created: latency-svc-l5ztk Apr 4 00:03:52.678: INFO: Got endpoints: latency-svc-h7g5z [635.991025ms] Apr 4 00:03:52.695: INFO: Got endpoints: latency-svc-l5ztk [623.582261ms] Apr 4 00:03:52.720: INFO: Created: latency-svc-647gl Apr 4 00:03:52.728: INFO: Got endpoints: latency-svc-647gl [620.730221ms] Apr 4 00:03:52.744: INFO: Created: latency-svc-nm5nd Apr 4 00:03:52.752: INFO: Got endpoints: latency-svc-nm5nd [590.69511ms] Apr 4 00:03:52.786: INFO: Created: latency-svc-s86n8 Apr 4 00:03:52.804: INFO: Got endpoints: latency-svc-s86n8 [593.906269ms] Apr 4 00:03:52.834: INFO: Created: latency-svc-nmdvs Apr 4 00:03:52.854: INFO: Got endpoints: latency-svc-nmdvs [601.296476ms] Apr 4 00:03:52.881: INFO: Created: latency-svc-knd9q Apr 4 00:03:52.930: INFO: Got endpoints: latency-svc-knd9q [657.424903ms] Apr 4 00:03:52.931: INFO: Created: latency-svc-ffzwk Apr 4 00:03:52.937: INFO: Got endpoints: latency-svc-ffzwk [613.620016ms] Apr 4 00:03:52.971: INFO: Created: latency-svc-xlbqw Apr 4 00:03:52.985: INFO: Got endpoints: latency-svc-xlbqw [583.083988ms] Apr 4 00:03:53.014: INFO: Created: latency-svc-ngz52 Apr 4 
00:03:53.055: INFO: Got endpoints: latency-svc-ngz52 [632.856572ms] Apr 4 00:03:53.086: INFO: Created: latency-svc-klnmd Apr 4 00:03:53.102: INFO: Got endpoints: latency-svc-klnmd [628.116183ms] Apr 4 00:03:53.121: INFO: Created: latency-svc-hbrtm Apr 4 00:03:53.151: INFO: Got endpoints: latency-svc-hbrtm [619.164147ms] Apr 4 00:03:53.199: INFO: Created: latency-svc-zdm92 Apr 4 00:03:53.215: INFO: Got endpoints: latency-svc-zdm92 [647.245738ms] Apr 4 00:03:53.241: INFO: Created: latency-svc-jzjkj Apr 4 00:03:53.257: INFO: Got endpoints: latency-svc-jzjkj [665.147672ms] Apr 4 00:03:53.277: INFO: Created: latency-svc-9swf7 Apr 4 00:03:53.313: INFO: Got endpoints: latency-svc-9swf7 [696.426227ms] Apr 4 00:03:53.319: INFO: Created: latency-svc-bwc9t Apr 4 00:03:53.335: INFO: Got endpoints: latency-svc-bwc9t [657.234772ms] Apr 4 00:03:53.356: INFO: Created: latency-svc-rg5qp Apr 4 00:03:53.369: INFO: Got endpoints: latency-svc-rg5qp [673.503696ms] Apr 4 00:03:53.398: INFO: Created: latency-svc-s56w7 Apr 4 00:03:53.411: INFO: Got endpoints: latency-svc-s56w7 [683.219694ms] Apr 4 00:03:53.463: INFO: Created: latency-svc-4fg4x Apr 4 00:03:53.487: INFO: Created: latency-svc-gr8s5 Apr 4 00:03:53.488: INFO: Got endpoints: latency-svc-4fg4x [735.860075ms] Apr 4 00:03:53.501: INFO: Got endpoints: latency-svc-gr8s5 [696.738761ms] Apr 4 00:03:53.536: INFO: Created: latency-svc-h2hs9 Apr 4 00:03:53.548: INFO: Got endpoints: latency-svc-h2hs9 [694.296688ms] Apr 4 00:03:53.549: INFO: Latencies: [44.82179ms 147.79462ms 164.333474ms 203.850485ms 280.111423ms 306.064937ms 341.484971ms 399.973243ms 413.851967ms 449.230556ms 538.905514ms 539.185839ms 548.437461ms 566.863703ms 572.393283ms 574.836601ms 575.256341ms 580.793575ms 581.594223ms 583.083988ms 583.415105ms 586.184344ms 588.822496ms 588.942331ms 590.69511ms 593.388455ms 593.906269ms 598.081539ms 599.145138ms 599.63315ms 601.02236ms 601.296476ms 603.011916ms 612.256116ms 613.620016ms 615.938371ms 617.543575ms 617.591ms 
619.164147ms 620.730221ms 623.403619ms 623.582261ms 625.032363ms 628.116183ms 629.916367ms 630.107199ms 630.837322ms 632.856572ms 633.615748ms 634.022178ms 634.498512ms 635.161835ms 635.991025ms 636.541308ms 639.475108ms 641.090898ms 641.105577ms 641.600824ms 641.656266ms 644.297133ms 644.929341ms 646.820203ms 646.951052ms 647.245738ms 648.660993ms 649.96697ms 650.692996ms 653.200875ms 653.223746ms 653.271123ms 653.447057ms 657.234772ms 657.424903ms 657.626806ms 658.961443ms 659.361621ms 659.632239ms 659.711771ms 660.225028ms 660.426548ms 664.956968ms 665.147672ms 665.161243ms 666.751087ms 669.493721ms 669.517607ms 670.684561ms 670.700779ms 671.086401ms 671.535736ms 673.503696ms 674.078137ms 674.193585ms 675.11616ms 676.743192ms 677.120128ms 677.393121ms 677.562331ms 677.730803ms 677.877311ms 679.453582ms 679.68003ms 681.27599ms 681.294231ms 681.908927ms 683.001ms 683.07097ms 683.219694ms 684.851155ms 685.980034ms 686.429113ms 688.179285ms 689.08503ms 689.326662ms 691.980074ms 692.824613ms 694.099584ms 694.296688ms 696.426227ms 696.738761ms 696.811386ms 697.687876ms 700.100788ms 701.420676ms 705.446112ms 706.821779ms 706.995789ms 707.840906ms 710.041299ms 711.509665ms 712.448689ms 712.455285ms 712.963598ms 713.052005ms 713.349143ms 713.777686ms 718.936954ms 719.124165ms 721.028113ms 726.494668ms 728.321849ms 735.860075ms 736.688464ms 737.056367ms 738.249799ms 738.726396ms 739.866425ms 740.422251ms 743.357346ms 745.733737ms 746.771425ms 747.169588ms 749.156722ms 750.705559ms 751.050003ms 751.690418ms 754.701195ms 761.654539ms 761.957712ms 768.989207ms 773.298654ms 774.201433ms 777.341332ms 779.122975ms 779.704789ms 780.721102ms 784.960498ms 808.984777ms 826.533019ms 826.997068ms 838.770888ms 841.90266ms 843.180251ms 845.227176ms 847.900862ms 857.737384ms 870.773756ms 874.528189ms 881.045631ms 881.808872ms 886.634066ms 982.533819ms 997.500455ms 1.002347485s 1.036294346s 1.051615344s 1.07092458s 1.097902776s 1.11128514s 1.111576124s 1.120262706s 1.210222424s 
1.215569657s 1.242861695s 1.246341526s 1.25205452s 1.272059609s 1.2765335s 1.300466116s 1.326071513s] Apr 4 00:03:53.549: INFO: 50 %ile: 679.453582ms Apr 4 00:03:53.549: INFO: 90 %ile: 886.634066ms Apr 4 00:03:53.549: INFO: 99 %ile: 1.300466116s Apr 4 00:03:53.549: INFO: Total sample count: 200 [AfterEach] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 4 00:03:53.549: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svc-latency-4230" for this suite. • [SLOW TEST:14.070 seconds] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-network] Service endpoints latency should not be very high [Conformance]","total":275,"completed":91,"skipped":1494,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 4 00:03:53.560: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:126 STEP: Setting 
up server cert STEP: Create role binding to let cr conversion webhook read extension-apiserver-authentication STEP: Deploying the custom resource conversion webhook pod STEP: Wait for the deployment to be ready Apr 4 00:03:54.124: INFO: deployment "sample-crd-conversion-webhook-deployment" doesn't have the required revision set Apr 4 00:03:56.136: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721555434, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721555434, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721555434, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721555434, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-54c8b67c75\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Apr 4 00:03:59.164: INFO: Waiting for amount of service:e2e-test-crd-conversion-webhook endpoints to be 1 [It] should be able to convert from CR v1 to CR v2 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Apr 4 00:03:59.166: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating a v1 custom resource STEP: v2 custom resource should be converted [AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 4 00:04:00.457: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-webhook-3178" for this suite. [AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:137 • [SLOW TEST:7.162 seconds] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to convert from CR v1 to CR v2 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]","total":275,"completed":92,"skipped":1521,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 4 00:04:00.722: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test downward api env vars Apr 4 00:04:00.912: INFO: 
Waiting up to 5m0s for pod "downward-api-320a9d2b-4763-4a15-bfe5-a02f59d7ea25" in namespace "downward-api-7920" to be "Succeeded or Failed" Apr 4 00:04:00.922: INFO: Pod "downward-api-320a9d2b-4763-4a15-bfe5-a02f59d7ea25": Phase="Pending", Reason="", readiness=false. Elapsed: 10.788946ms Apr 4 00:04:03.070: INFO: Pod "downward-api-320a9d2b-4763-4a15-bfe5-a02f59d7ea25": Phase="Pending", Reason="", readiness=false. Elapsed: 2.158334374s Apr 4 00:04:05.125: INFO: Pod "downward-api-320a9d2b-4763-4a15-bfe5-a02f59d7ea25": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.21384291s STEP: Saw pod success Apr 4 00:04:05.126: INFO: Pod "downward-api-320a9d2b-4763-4a15-bfe5-a02f59d7ea25" satisfied condition "Succeeded or Failed" Apr 4 00:04:05.270: INFO: Trying to get logs from node latest-worker2 pod downward-api-320a9d2b-4763-4a15-bfe5-a02f59d7ea25 container dapi-container: STEP: delete the pod Apr 4 00:04:05.425: INFO: Waiting for pod downward-api-320a9d2b-4763-4a15-bfe5-a02f59d7ea25 to disappear Apr 4 00:04:05.430: INFO: Pod downward-api-320a9d2b-4763-4a15-bfe5-a02f59d7ea25 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 4 00:04:05.430: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-7920" for this suite. 
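The Downward API test above injects the container's limits.cpu/memory and requests.cpu/memory as environment variables. A minimal sketch of the env stanza such a pod carries, built as the dict a client would serialize (the env var names and `dapi-container` name are illustrative, not copied from the test's manifest; the `resourceFieldRef` field names follow the v1 Pod API):

```python
def resource_env(name, resource, divisor="1"):
    """One env var entry backed by a Downward API resourceFieldRef.
    Names here are illustrative assumptions, not the test's actual spec."""
    return {
        "name": name,
        "valueFrom": {
            "resourceFieldRef": {
                "containerName": "dapi-container",  # hypothetical name
                "resource": resource,
                "divisor": divisor,
            }
        },
    }

# The four resources the test exposes as env vars:
env = [
    resource_env("CPU_LIMIT", "limits.cpu"),
    resource_env("MEMORY_LIMIT", "limits.memory"),
    resource_env("CPU_REQUEST", "requests.cpu"),
    resource_env("MEMORY_REQUEST", "requests.memory"),
]
```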
•{"msg":"PASSED [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]","total":275,"completed":93,"skipped":1542,"failed":0} SSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 4 00:04:05.523: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:91 Apr 4 00:04:05.701: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Apr 4 00:04:05.823: INFO: Waiting for terminating namespaces to be deleted... 
Apr 4 00:04:05.919: INFO: Logging pods the kubelet thinks is on node latest-worker before test Apr 4 00:04:05.946: INFO: kindnet-vnjgh from kube-system started at 2020-03-15 18:28:07 +0000 UTC (1 container statuses recorded) Apr 4 00:04:05.946: INFO: Container kindnet-cni ready: true, restart count 0 Apr 4 00:04:05.946: INFO: kube-proxy-s9v6p from kube-system started at 2020-03-15 18:28:07 +0000 UTC (1 container statuses recorded) Apr 4 00:04:05.946: INFO: Container kube-proxy ready: true, restart count 0 Apr 4 00:04:05.946: INFO: Logging pods the kubelet thinks is on node latest-worker2 before test Apr 4 00:04:05.951: INFO: kindnet-zq6gp from kube-system started at 2020-03-15 18:28:07 +0000 UTC (1 container statuses recorded) Apr 4 00:04:05.951: INFO: Container kindnet-cni ready: true, restart count 0 Apr 4 00:04:05.951: INFO: kube-proxy-c5xlk from kube-system started at 2020-03-15 18:28:07 +0000 UTC (1 container statuses recorded) Apr 4 00:04:05.951: INFO: Container kube-proxy ready: true, restart count 0 [It] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random label on the found node. 
STEP: verifying the node has the label kubernetes.io/e2e-5152d597-fead-418b-84bd-45b0dd60e45f 90 STEP: Trying to create a pod(pod1) with hostport 54321 and hostIP 127.0.0.1 and expect scheduled STEP: Trying to create another pod(pod2) with hostport 54321 but hostIP 127.0.0.2 on the node which pod1 resides and expect scheduled STEP: Trying to create a third pod(pod3) with hostport 54321, hostIP 127.0.0.2 but use UDP protocol on the node which pod2 resides STEP: removing the label kubernetes.io/e2e-5152d597-fead-418b-84bd-45b0dd60e45f off the node latest-worker2 STEP: verifying the node doesn't have the label kubernetes.io/e2e-5152d597-fead-418b-84bd-45b0dd60e45f [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 4 00:04:22.398: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-7995" for this suite. [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:82 • [SLOW TEST:16.883 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance]","total":275,"completed":94,"skipped":1551,"failed":0} SS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is 
set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 4 00:04:22.406: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set Apr 4 00:04:25.517: INFO: Expected: &{OK} to match Container's Termination Message: OK -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 4 00:04:25.727: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-1522" for this suite. 
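The termination-message test above writes "OK" to the message file and sets `TerminationMessagePolicy: FallbackToLogsOnError`; the container succeeds, so the file contents win. A simplified sketch of that selection rule (not the kubelet's actual code, which also enforces size limits):

```python
def termination_message(file_contents, logs, policy, exit_code, tail=2048):
    """Pick a container's termination message, sketching the kubelet rule:
    the message file always wins when non-empty; FallbackToLogsOnError
    consults the log tail only when the file is empty AND the container
    exited with an error."""
    if file_contents:
        return file_contents
    if policy == "FallbackToLogsOnError" and exit_code != 0:
        return logs[-tail:]
    return ""
```

In the test's case, `termination_message("OK", ..., "FallbackToLogsOnError", 0)` returns "OK" from the file, matching the `Expected: &{OK}` assertion in the log.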
•{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":275,"completed":95,"skipped":1553,"failed":0} SSSSSSSSSSS ------------------------------ [k8s.io] Pods should get a host IP [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 4 00:04:25.780: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:178 [It] should get a host IP [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating pod Apr 4 00:04:29.896: INFO: Pod pod-hostip-3798a7d9-7d76-4396-92c5-ad6065f875e0 has hostIP: 172.17.0.13 [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 4 00:04:29.896: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-388" for this suite. 
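The host IP test above asserts that a scheduled pod reports its node's address (here 172.17.0.13) in `status.hostIP`. A small sketch of that check, extracting and validating the field from a status dict (a simplification of what the e2e test verifies):

```python
import ipaddress

def host_ip_for(pod_status):
    """Return the pod's hostIP from its status, validating it parses as
    an IP address. Raises if the pod has not been scheduled yet."""
    ip = pod_status.get("hostIP")
    if not ip:
        raise ValueError("pod has no hostIP yet (not scheduled/running)")
    return str(ipaddress.ip_address(ip))  # raises ValueError if malformed

# Status shape mirroring the log line above (fields abbreviated):
status = {"phase": "Running", "hostIP": "172.17.0.13"}
```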
•{"msg":"PASSED [k8s.io] Pods should get a host IP [NodeConformance] [Conformance]","total":275,"completed":96,"skipped":1564,"failed":0} S ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group but different versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 4 00:04:29.903: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for multiple CRDs of same group but different versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: CRs in the same group but different versions (one multiversion CRD) show up in OpenAPI documentation Apr 4 00:04:29.976: INFO: >>> kubeConfig: /root/.kube/config STEP: CRs in the same group but different versions (two CRDs) show up in OpenAPI documentation Apr 4 00:04:40.397: INFO: >>> kubeConfig: /root/.kube/config Apr 4 00:04:43.302: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 4 00:04:54.850: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-988" for this suite. 
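The OpenAPI-publishing test above exercises CRDs serving several versions of the same group. One invariant the apiserver enforces on such a CRD is that exactly one served version is the storage version; a simplified sketch of that validation over a `spec.versions` list:

```python
def validate_versions(versions):
    """Check a CRD's versions list: any number may be served, but exactly
    one must be marked as the storage version. Simplified mirror of the
    apiserver's rule; real validation checks more (names, schemas, ...)."""
    served = [v["name"] for v in versions if v.get("served")]
    storage = [v["name"] for v in versions if v.get("storage")]
    if len(storage) != 1:
        raise ValueError("exactly one storage version required")
    return served, storage[0]

# A multi-version CRD like the one the test publishes (names illustrative):
versions = [
    {"name": "v1", "served": True, "storage": True},
    {"name": "v2", "served": True, "storage": False},
]
```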
• [SLOW TEST:24.954 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for multiple CRDs of same group but different versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group but different versions [Conformance]","total":275,"completed":97,"skipped":1565,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicationController should surface a failure condition on a common issue like exceeded quota [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 4 00:04:54.858: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should surface a failure condition on a common issue like exceeded quota [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Apr 4 00:04:54.896: INFO: Creating quota "condition-test" that allows only two pods to run in the current namespace STEP: Creating rc "condition-test" that asks for more than the allowed pod quota STEP: Checking rc "condition-test" has the desired failure condition set STEP: Scaling down rc "condition-test" to satisfy pod quota Apr 4 00:05:18.959: INFO: Updating replication controller "condition-test" STEP: Checking rc "condition-test" has no failure condition set [AfterEach] [sig-apps] 
ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 4 00:05:19.970: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-3421" for this suite. • [SLOW TEST:25.120 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should surface a failure condition on a common issue like exceeded quota [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-apps] ReplicationController should surface a failure condition on a common issue like exceeded quota [Conformance]","total":275,"completed":98,"skipped":1604,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 4 00:05:19.979: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating configMap with name projected-configmap-test-volume-28c7f750-4fdc-4c94-9674-c49829c5d1f2 STEP: Creating a pod to test consume configMaps Apr 4 00:05:20.134: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-cd71f2ed-3d60-4d55-9310-896a96776ec3" in namespace "projected-6898" to be "Succeeded or Failed" Apr 4 00:05:20.260: 
INFO: Pod "pod-projected-configmaps-cd71f2ed-3d60-4d55-9310-896a96776ec3": Phase="Pending", Reason="", readiness=false. Elapsed: 126.511653ms Apr 4 00:05:22.284: INFO: Pod "pod-projected-configmaps-cd71f2ed-3d60-4d55-9310-896a96776ec3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.150103221s Apr 4 00:05:24.287: INFO: Pod "pod-projected-configmaps-cd71f2ed-3d60-4d55-9310-896a96776ec3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.152914139s STEP: Saw pod success Apr 4 00:05:24.287: INFO: Pod "pod-projected-configmaps-cd71f2ed-3d60-4d55-9310-896a96776ec3" satisfied condition "Succeeded or Failed" Apr 4 00:05:24.290: INFO: Trying to get logs from node latest-worker pod pod-projected-configmaps-cd71f2ed-3d60-4d55-9310-896a96776ec3 container projected-configmap-volume-test: STEP: delete the pod Apr 4 00:05:24.321: INFO: Waiting for pod pod-projected-configmaps-cd71f2ed-3d60-4d55-9310-896a96776ec3 to disappear Apr 4 00:05:24.350: INFO: Pod pod-projected-configmaps-cd71f2ed-3d60-4d55-9310-896a96776ec3 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 4 00:05:24.350: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-6898" for this suite. 
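The "Waiting up to 5m0s for pod ... to be 'Succeeded or Failed'" lines above come from a poll loop in the framework. A cluster-free sketch of that shape, with the phase lookup and sleep injected so it can be exercised standalone (parameter names are ours, not the framework's):

```python
import time

def wait_for_phase(get_phase, want=("Succeeded", "Failed"),
                   timeout=300.0, poll=2.0,
                   clock=time.monotonic, sleep=time.sleep):
    """Poll until get_phase() returns a terminal phase or timeout elapses,
    mirroring the framework's wait loop. get_phase is injected so this
    sketch needs no cluster."""
    start = clock()
    while True:
        phase = get_phase()
        if phase in want:
            return phase
        if clock() - start > timeout:
            raise TimeoutError(f"pod still {phase} after {timeout}s")
        sleep(poll)
```

The log's Elapsed timestamps (10ms, 2.15s, 4.15s) are exactly the successive `clock() - start` values such a loop would report.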
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":275,"completed":99,"skipped":1638,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 4 00:05:24.359: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating the pod Apr 4 00:05:28.981: INFO: Successfully updated pod "labelsupdate4c74dddc-d379-469e-a7f0-749c3d245d80" [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 4 00:05:33.049: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-9223" for this suite. 
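The labels-update test above patches the pod's labels and waits for the projected downwardAPI volume file to refresh. The file the kubelet writes lists labels one per line as `key="value"`, sorted by key; a sketch of that rendering:

```python
def render_labels(labels):
    """Render pod labels the way a downwardAPI volume file presents them:
    one key="value" line per label, sorted by key. A sketch of the file
    format only, not the kubelet's atomic-writer implementation."""
    return "\n".join(f'{k}="{v}"' for k, v in sorted(labels.items()))
```

After the update succeeds, the test re-reads this file from inside the pod and expects the new label value to appear.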
• [SLOW TEST:8.699 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:36 should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance]","total":275,"completed":100,"skipped":1650,"failed":0} [sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 4 00:05:33.059: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating configMap with name configmap-test-volume-37999679-cc3e-48fb-bb29-dfa4e6b1b8b3 STEP: Creating a pod to test consume configMaps Apr 4 00:05:33.158: INFO: Waiting up to 5m0s for pod "pod-configmaps-dbbb0706-c34d-4981-8415-f982f287e0b9" in namespace "configmap-9277" to be "Succeeded or Failed" Apr 4 00:05:33.173: INFO: Pod "pod-configmaps-dbbb0706-c34d-4981-8415-f982f287e0b9": Phase="Pending", Reason="", readiness=false. Elapsed: 15.143452ms Apr 4 00:05:35.177: INFO: Pod "pod-configmaps-dbbb0706-c34d-4981-8415-f982f287e0b9": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.019086821s Apr 4 00:05:37.181: INFO: Pod "pod-configmaps-dbbb0706-c34d-4981-8415-f982f287e0b9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.022983515s STEP: Saw pod success Apr 4 00:05:37.181: INFO: Pod "pod-configmaps-dbbb0706-c34d-4981-8415-f982f287e0b9" satisfied condition "Succeeded or Failed" Apr 4 00:05:37.184: INFO: Trying to get logs from node latest-worker pod pod-configmaps-dbbb0706-c34d-4981-8415-f982f287e0b9 container configmap-volume-test: STEP: delete the pod Apr 4 00:05:37.217: INFO: Waiting for pod pod-configmaps-dbbb0706-c34d-4981-8415-f982f287e0b9 to disappear Apr 4 00:05:37.221: INFO: Pod pod-configmaps-dbbb0706-c34d-4981-8415-f982f287e0b9 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 4 00:05:37.221: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-9277" for this suite. •{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":101,"skipped":1650,"failed":0} SSSSS ------------------------------ [sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 4 00:05:37.228: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:134 [It] should run and stop complex daemon [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Apr 4 00:05:37.300: INFO: Creating daemon "daemon-set" with a node selector STEP: Initially, daemon pods should not be running on any nodes. Apr 4 00:05:37.317: INFO: Number of nodes with available pods: 0 Apr 4 00:05:37.317: INFO: Number of running nodes: 0, number of available pods: 0 STEP: Change node label to blue, check that daemon pod is launched. Apr 4 00:05:37.366: INFO: Number of nodes with available pods: 0 Apr 4 00:05:37.366: INFO: Node latest-worker is running more than one daemon pod Apr 4 00:05:38.392: INFO: Number of nodes with available pods: 0 Apr 4 00:05:38.392: INFO: Node latest-worker is running more than one daemon pod Apr 4 00:05:39.369: INFO: Number of nodes with available pods: 0 Apr 4 00:05:39.369: INFO: Node latest-worker is running more than one daemon pod Apr 4 00:05:40.371: INFO: Number of nodes with available pods: 0 Apr 4 00:05:40.371: INFO: Node latest-worker is running more than one daemon pod Apr 4 00:05:41.370: INFO: Number of nodes with available pods: 1 Apr 4 00:05:41.370: INFO: Number of running nodes: 1, number of available pods: 1 STEP: Update the node label to green, and wait for daemons to be unscheduled Apr 4 00:05:41.428: INFO: Number of nodes with available pods: 1 Apr 4 00:05:41.428: INFO: Number of running nodes: 0, number of available pods: 1 Apr 4 00:05:42.444: INFO: Number of nodes with available pods: 0 Apr 4 00:05:42.444: INFO: Number of running nodes: 0, number of available pods: 0 STEP: Update DaemonSet node selector to green, and change its update strategy to RollingUpdate Apr 4 00:05:42.464: INFO: Number of nodes with available pods: 0 Apr 4 00:05:42.464: INFO: Node latest-worker is running more than one daemon pod Apr 4 00:05:43.468: INFO: Number of nodes with available pods: 0 Apr 4 00:05:43.468: INFO: Node latest-worker is running more than one daemon pod Apr 4 00:05:44.467: INFO: Number of nodes with available 
pods: 0 Apr 4 00:05:44.467: INFO: Node latest-worker is running more than one daemon pod Apr 4 00:05:45.468: INFO: Number of nodes with available pods: 0 Apr 4 00:05:45.468: INFO: Node latest-worker is running more than one daemon pod Apr 4 00:05:46.468: INFO: Number of nodes with available pods: 0 Apr 4 00:05:46.468: INFO: Node latest-worker is running more than one daemon pod Apr 4 00:05:47.476: INFO: Number of nodes with available pods: 0 Apr 4 00:05:47.476: INFO: Node latest-worker is running more than one daemon pod Apr 4 00:05:48.468: INFO: Number of nodes with available pods: 0 Apr 4 00:05:48.468: INFO: Node latest-worker is running more than one daemon pod Apr 4 00:05:49.467: INFO: Number of nodes with available pods: 0 Apr 4 00:05:49.467: INFO: Node latest-worker is running more than one daemon pod Apr 4 00:05:50.468: INFO: Number of nodes with available pods: 0 Apr 4 00:05:50.468: INFO: Node latest-worker is running more than one daemon pod Apr 4 00:05:51.479: INFO: Number of nodes with available pods: 0 Apr 4 00:05:51.479: INFO: Node latest-worker is running more than one daemon pod Apr 4 00:05:52.468: INFO: Number of nodes with available pods: 0 Apr 4 00:05:52.468: INFO: Node latest-worker is running more than one daemon pod Apr 4 00:05:53.471: INFO: Number of nodes with available pods: 0 Apr 4 00:05:53.471: INFO: Node latest-worker is running more than one daemon pod Apr 4 00:05:54.572: INFO: Number of nodes with available pods: 0 Apr 4 00:05:54.572: INFO: Node latest-worker is running more than one daemon pod Apr 4 00:05:55.468: INFO: Number of nodes with available pods: 0 Apr 4 00:05:55.468: INFO: Node latest-worker is running more than one daemon pod Apr 4 00:05:56.478: INFO: Number of nodes with available pods: 1 Apr 4 00:05:56.478: INFO: Number of running nodes: 1, number of available pods: 1 [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:100 STEP: Deleting 
DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-7660, will wait for the garbage collector to delete the pods Apr 4 00:05:56.543: INFO: Deleting DaemonSet.extensions daemon-set took: 6.473868ms Apr 4 00:05:56.844: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.227165ms Apr 4 00:06:02.746: INFO: Number of nodes with available pods: 0 Apr 4 00:06:02.746: INFO: Number of running nodes: 0, number of available pods: 0 Apr 4 00:06:02.749: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-7660/daemonsets","resourceVersion":"5198219"},"items":null} Apr 4 00:06:02.752: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-7660/pods","resourceVersion":"5198219"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 4 00:06:02.780: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-7660" for this suite. 
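The DaemonSet test above drives "Number of running nodes" between 0 and 1 by relabeling a node against the DaemonSet's nodeSelector. A simplified sketch of the set of nodes such a selector admits (equality-based selectors only; the label key/values here are illustrative stand-ins for the test's blue/green labels):

```python
def nodes_scheduled(nodes, node_selector):
    """Return names of nodes whose labels satisfy an equality-based
    nodeSelector -- the nodes a DaemonSet controller would target.
    Simplified: no taints, affinity, or set-based selectors."""
    return [
        name for name, labels in nodes.items()
        if all(labels.get(k) == v for k, v in node_selector.items())
    ]

# Hypothetical cluster state mirroring the two worker nodes in the log:
nodes = {
    "latest-worker": {"color": "blue"},
    "latest-worker2": {"color": "green"},
}
```

Relabeling a node from blue to green is what flips it in or out of this set, which is the transition the poll loop in the log is waiting on.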
• [SLOW TEST:25.560 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should run and stop complex daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance]","total":275,"completed":102,"skipped":1655,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 4 00:06:02.788: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42
[It] should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test downward API volume plugin
Apr 4 00:06:02.870: INFO: Waiting up to 5m0s for pod "downwardapi-volume-1c883f16-5812-44af-9c40-6a1a5fd07ca1" in namespace "projected-682" to be "Succeeded or Failed"
Apr 4 00:06:02.875: INFO: Pod "downwardapi-volume-1c883f16-5812-44af-9c40-6a1a5fd07ca1": Phase="Pending", Reason="", readiness=false. Elapsed: 4.29416ms
Apr 4 00:06:04.878: INFO: Pod "downwardapi-volume-1c883f16-5812-44af-9c40-6a1a5fd07ca1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007884676s
Apr 4 00:06:06.884: INFO: Pod "downwardapi-volume-1c883f16-5812-44af-9c40-6a1a5fd07ca1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.01330914s
STEP: Saw pod success
Apr 4 00:06:06.884: INFO: Pod "downwardapi-volume-1c883f16-5812-44af-9c40-6a1a5fd07ca1" satisfied condition "Succeeded or Failed"
Apr 4 00:06:06.886: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-1c883f16-5812-44af-9c40-6a1a5fd07ca1 container client-container:
STEP: delete the pod
Apr 4 00:06:06.951: INFO: Waiting for pod downwardapi-volume-1c883f16-5812-44af-9c40-6a1a5fd07ca1 to disappear
Apr 4 00:06:06.962: INFO: Pod downwardapi-volume-1c883f16-5812-44af-9c40-6a1a5fd07ca1 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 4 00:06:06.963: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-682" for this suite.
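Editor's note: this spec exercises the downward API via a projected volume, exposing the container's CPU limit as a file that the container reads back. A minimal equivalent pod might look like the sketch below; the container name `client-container` appears in the log, while the mount path, command, and the limit value are illustrative assumptions:

```yaml
# Sketch only; mount path, command, image, and limit value are assumptions.
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-example
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox:1.29            # placeholder image
    command: ["sh", "-c", "cat /etc/podinfo/cpu_limit"]
    resources:
      limits:
        cpu: "500m"                # value the container reads back via the volume
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: cpu_limit
            resourceFieldRef:
              containerName: client-container
              resource: limits.cpu
```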
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance]","total":275,"completed":103,"skipped":1679,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition creating/deleting custom resource definition objects works [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 4 00:06:07.000: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
[It] creating/deleting custom resource definition objects works [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Apr 4 00:06:07.064: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 4 00:06:08.082: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-3709" for this suite.
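Editor's note: the CRD spec above only creates and then deletes a CustomResourceDefinition object. The log does not show the definition it used, so the following is purely an illustrative minimal CRD of the same kind; group, names, and version are assumptions:

```yaml
# Illustrative minimal CRD; group, kind, and version are assumptions.
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: foos.example.com
spec:
  group: example.com
  scope: Namespaced
  names:
    plural: foos
    singular: foo
    kind: Foo
  versions:
  - name: v1
    served: true
    storage: true
    schema:
      openAPIV3Schema:
        type: object
        x-kubernetes-preserve-unknown-fields: true
```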
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition creating/deleting custom resource definition objects works [Conformance]","total":275,"completed":104,"skipped":1712,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] [sig-node] PreStop should call prestop when killing a pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] [sig-node] PreStop
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 4 00:06:08.104: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename prestop
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] [sig-node] PreStop
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pre_stop.go:171
[It] should call prestop when killing a pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating server pod server in namespace prestop-2860
STEP: Waiting for pods to come up.
STEP: Creating tester pod tester in namespace prestop-2860
STEP: Deleting pre-stop pod
Apr 4 00:06:21.248: INFO: Saw: {
	"Hostname": "server",
	"Sent": null,
	"Received": {
		"prestop": 1
	},
	"Errors": null,
	"Log": [
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.",
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.",
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up."
	],
	"StillContactingPeers": true
}
STEP: Deleting the server pod
[AfterEach] [k8s.io] [sig-node] PreStop
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 4 00:06:21.253: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "prestop-2860" for this suite.
• [SLOW TEST:13.211 seconds]
[k8s.io] [sig-node] PreStop
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  should call prestop when killing a pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] [sig-node] PreStop should call prestop when killing a pod [Conformance]","total":275,"completed":105,"skipped":1773,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Services should serve multiport endpoints from pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 4 00:06:21.316: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:698
[It] should serve multiport endpoints from pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating service multi-endpoint-test in namespace services-4187
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-4187 to expose endpoints map[]
Apr 4 00:06:21.396: INFO: Get endpoints failed (11.098862ms elapsed, ignoring for 5s): endpoints "multi-endpoint-test" not found
Apr 4 00:06:22.400: INFO: successfully validated that service multi-endpoint-test in namespace services-4187 exposes endpoints map[] (1.015330048s elapsed)
STEP: Creating pod pod1 in namespace services-4187
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-4187 to expose endpoints map[pod1:[100]]
Apr 4 00:06:25.467: INFO: successfully validated that service multi-endpoint-test in namespace services-4187 exposes endpoints map[pod1:[100]] (3.058522138s elapsed)
STEP: Creating pod pod2 in namespace services-4187
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-4187 to expose endpoints map[pod1:[100] pod2:[101]]
Apr 4 00:06:29.655: INFO: successfully validated that service multi-endpoint-test in namespace services-4187 exposes endpoints map[pod1:[100] pod2:[101]] (4.183399489s elapsed)
STEP: Deleting pod pod1 in namespace services-4187
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-4187 to expose endpoints map[pod2:[101]]
Apr 4 00:06:30.740: INFO: successfully validated that service multi-endpoint-test in namespace services-4187 exposes endpoints map[pod2:[101]] (1.080714785s elapsed)
STEP: Deleting pod pod2 in namespace services-4187
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-4187 to expose endpoints map[]
Apr 4 00:06:31.771: INFO: successfully validated that service multi-endpoint-test in namespace services-4187 exposes endpoints map[] (1.025796186s elapsed)
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 4 00:06:31.844: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-4187" for this suite.
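Editor's note: the multiport-endpoints spec above watches the service's Endpoints object as pods with two different container ports come and go (the endpoint maps in the log show target ports 100 and 101). A Service in that spirit might look like the sketch below; the service name and target ports come from the log, while the selector, port names, and service ports are illustrative assumptions:

```yaml
# Sketch only; selector, port names, and service ports are assumptions.
apiVersion: v1
kind: Service
metadata:
  name: multi-endpoint-test
  namespace: services-4187
spec:
  selector:
    app: multi-endpoint-test
  ports:
  - name: portname1
    port: 80
    targetPort: 100   # matches map[pod1:[100]] in the log
  - name: portname2
    port: 81
    targetPort: 101   # matches map[pod2:[101]] in the log
```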
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:702
• [SLOW TEST:10.538 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should serve multiport endpoints from pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-network] Services should serve multiport endpoints from pods [Conformance]","total":275,"completed":106,"skipped":1801,"failed":0}
SSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 4 00:06:31.854: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:91
Apr 4 00:06:31.929: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Apr 4 00:06:31.969: INFO: Waiting for terminating namespaces to be deleted...
Apr 4 00:06:31.972: INFO: Logging pods the kubelet thinks is on node latest-worker before test
Apr 4 00:06:31.977: INFO: kube-proxy-s9v6p from kube-system started at 2020-03-15 18:28:07 +0000 UTC (1 container statuses recorded)
Apr 4 00:06:31.977: INFO: Container kube-proxy ready: true, restart count 0
Apr 4 00:06:31.977: INFO: server from prestop-2860 started at 2020-04-04 00:06:08 +0000 UTC (1 container statuses recorded)
Apr 4 00:06:31.977: INFO: Container server ready: false, restart count 0
Apr 4 00:06:31.977: INFO: pod2 from services-4187 started at 2020-04-04 00:06:25 +0000 UTC (1 container statuses recorded)
Apr 4 00:06:31.977: INFO: Container pause ready: true, restart count 0
Apr 4 00:06:31.977: INFO: kindnet-vnjgh from kube-system started at 2020-03-15 18:28:07 +0000 UTC (1 container statuses recorded)
Apr 4 00:06:31.977: INFO: Container kindnet-cni ready: true, restart count 0
Apr 4 00:06:31.977: INFO: Logging pods the kubelet thinks is on node latest-worker2 before test
Apr 4 00:06:31.982: INFO: tester from prestop-2860 started at 2020-04-04 00:06:12 +0000 UTC (1 container statuses recorded)
Apr 4 00:06:31.982: INFO: Container tester ready: true, restart count 0
Apr 4 00:06:31.982: INFO: kindnet-zq6gp from kube-system started at 2020-03-15 18:28:07 +0000 UTC (1 container statuses recorded)
Apr 4 00:06:31.982: INFO: Container kindnet-cni ready: true, restart count 0
Apr 4 00:06:31.982: INFO: kube-proxy-c5xlk from kube-system started at 2020-03-15 18:28:07 +0000 UTC (1 container statuses recorded)
Apr 4 00:06:31.982: INFO: Container kube-proxy ready: true, restart count 0
Apr 4 00:06:31.982: INFO: pod1 from services-4187 started at 2020-04-04 00:06:22 +0000 UTC (1 container statuses recorded)
Apr 4 00:06:31.982: INFO: Container pause ready: false, restart count 0
[It] validates that NodeSelector is respected if matching [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Trying to launch a pod without a label to get a node which can launch it.
STEP: Explicitly delete pod here to free the resource it takes.
STEP: Trying to apply a random label on the found node.
STEP: verifying the node has the label kubernetes.io/e2e-5aa58cb9-49cc-46dd-9aed-2061242a21da 42
STEP: Trying to relaunch the pod, now with labels.
STEP: removing the label kubernetes.io/e2e-5aa58cb9-49cc-46dd-9aed-2061242a21da off the node latest-worker
STEP: verifying the node doesn't have the label kubernetes.io/e2e-5aa58cb9-49cc-46dd-9aed-2061242a21da
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 4 00:06:40.099: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-2296" for this suite.
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:82
• [SLOW TEST:8.288 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40
  validates that NodeSelector is respected if matching [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance]","total":275,"completed":107,"skipped":1804,"failed":0}
SSSSSS
------------------------------
[sig-network] Services should serve a basic endpoint from pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 4 00:06:40.142: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:698
[It] should serve a basic endpoint from pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating service endpoint-test2 in namespace services-4456
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-4456 to expose endpoints map[]
Apr 4 00:06:40.279: INFO: Get endpoints failed (56.722459ms elapsed, ignoring for 5s): endpoints "endpoint-test2" not found
Apr 4 00:06:41.283: INFO: successfully validated that service endpoint-test2 in namespace services-4456 exposes endpoints map[] (1.060899481s elapsed)
STEP: Creating pod pod1 in namespace services-4456
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-4456 to expose endpoints map[pod1:[80]]
Apr 4 00:06:44.390: INFO: successfully validated that service endpoint-test2 in namespace services-4456 exposes endpoints map[pod1:[80]] (3.099665079s elapsed)
STEP: Creating pod pod2 in namespace services-4456
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-4456 to expose endpoints map[pod1:[80] pod2:[80]]
Apr 4 00:06:47.662: INFO: successfully validated that service endpoint-test2 in namespace services-4456 exposes endpoints map[pod1:[80] pod2:[80]] (3.268304717s elapsed)
STEP: Deleting pod pod1 in namespace services-4456
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-4456 to expose endpoints map[pod2:[80]]
Apr 4 00:06:47.708: INFO: successfully validated that service endpoint-test2 in namespace services-4456 exposes endpoints map[pod2:[80]] (40.667999ms elapsed)
STEP: Deleting pod pod2 in namespace services-4456
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-4456 to expose endpoints map[]
Apr 4 00:06:47.733: INFO: successfully validated that service endpoint-test2 in namespace services-4456 exposes endpoints map[] (21.076957ms elapsed)
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 4 00:06:47.777: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-4456" for this suite.
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:702
• [SLOW TEST:7.651 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should serve a basic endpoint from pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-network] Services should serve a basic endpoint from pods [Conformance]","total":275,"completed":108,"skipped":1810,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods should support remote command execution over websockets [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 4 00:06:47.794: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:178
[It] should support remote command execution over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Apr 4 00:06:48.023: INFO: >>> kubeConfig: /root/.kube/config
STEP: creating the pod
STEP: submitting the pod to kubernetes
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 4 00:06:52.252: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-5743" for this suite.
•{"msg":"PASSED [k8s.io] Pods should support remote command execution over websockets [NodeConformance] [Conformance]","total":275,"completed":109,"skipped":1852,"failed":0}
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Services should find a service from listing all namespaces [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 4 00:06:52.261: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:698
[It] should find a service from listing all namespaces [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: fetching services
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 4 00:06:52.395: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-8632" for this suite.
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:702
•{"msg":"PASSED [sig-network] Services should find a service from listing all namespaces [Conformance]","total":275,"completed":110,"skipped":1872,"failed":0}
------------------------------
[k8s.io] [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] [sig-node] Events
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 4 00:06:52.423: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename events
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be sent by kubelets and the scheduler about pods scheduling and running [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: retrieving the pod
Apr 4 00:06:56.522: INFO: &Pod{ObjectMeta:{send-events-df073606-a11e-4c23-9947-8d03d15c352b events-8902 /api/v1/namespaces/events-8902/pods/send-events-df073606-a11e-4c23-9947-8d03d15c352b 4e4eb55f-e86a-4a5b-a4a0-c28562da937a 5198663 0 2020-04-04 00:06:52 +0000 UTC map[name:foo time:492948103] map[] [] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-v9rkm,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-v9rkm,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:p,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12,Command:[],Args:[serve-hostname],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:,HostPort:0,ContainerPort:80,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-v9rkm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-04 00:06:52 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-04 00:06:55 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-04 00:06:55 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-04 00:06:52 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:10.244.1.253,StartTime:2020-04-04 00:06:52 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:p,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-04-04 00:06:54 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12,ImageID:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost@sha256:1d7f0d77a6f07fd507f147a38d06a7c8269ebabd4f923bfe46d4fb8b396a520c,ContainerID:containerd://91024fdc421223a50cb67d90d22387f5f8d4434658f7297208a933005a16fb1b,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.253,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
STEP: checking for scheduler event about the pod
Apr 4 00:06:58.528: INFO: Saw scheduler event for our pod.
STEP: checking for kubelet event about the pod
Apr 4 00:07:00.533: INFO: Saw kubelet event for our pod.
STEP: deleting the pod
[AfterEach] [k8s.io] [sig-node] Events
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 4 00:07:00.539: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "events-8902" for this suite.
• [SLOW TEST:8.131 seconds]
[k8s.io] [sig-node] Events
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  should be sent by kubelets and the scheduler about pods scheduling and running [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running [Conformance]","total":275,"completed":111,"skipped":1872,"failed":0}
[sig-storage] EmptyDir wrapper volumes should not conflict [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 4 00:07:00.554: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir-wrapper
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not conflict [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Cleaning up the secret
STEP: Cleaning up the configmap
STEP: Cleaning up the pod
[AfterEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 4 00:07:04.710: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-wrapper-7154" for this suite.
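Editor's note: the "should not conflict" spec mounts a secret volume and a configmap volume in the same pod (each is materialized by the kubelet through an emptyDir-backed atomic writer, hence the suite name) and verifies the two mounts do not interfere. The log only shows the cleanup steps, so the pod below is purely an illustrative sketch; all names and the image are assumptions:

```yaml
# Sketch only; secret/configmap names and image are assumptions.
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-wrapper-example
spec:
  containers:
  - name: app
    image: k8s.gcr.io/pause:3.2      # placeholder image
    volumeMounts:
    - name: secret-vol
      mountPath: /etc/secret-volume
    - name: configmap-vol
      mountPath: /etc/configmap-volume
  volumes:
  - name: secret-vol
    secret:
      secretName: wrapper-test-secret       # hypothetical name
  - name: configmap-vol
    configMap:
      name: wrapper-test-configmap          # hypothetical name
```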
•{"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not conflict [Conformance]","total":275,"completed":112,"skipped":1872,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 4 00:07:04.728: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating secret with name secret-test-6da41012-81aa-43b2-8eb8-5107b1276c2b
STEP: Creating a pod to test consume secrets
Apr 4 00:07:05.188: INFO: Waiting up to 5m0s for pod "pod-secrets-13c20b21-a1e6-4bed-b9a0-fe9887d58932" in namespace "secrets-8904" to be "Succeeded or Failed"
Apr 4 00:07:05.220: INFO: Pod "pod-secrets-13c20b21-a1e6-4bed-b9a0-fe9887d58932": Phase="Pending", Reason="", readiness=false. Elapsed: 32.062904ms
Apr 4 00:07:07.223: INFO: Pod "pod-secrets-13c20b21-a1e6-4bed-b9a0-fe9887d58932": Phase="Pending", Reason="", readiness=false. Elapsed: 2.035115248s
Apr 4 00:07:09.238: INFO: Pod "pod-secrets-13c20b21-a1e6-4bed-b9a0-fe9887d58932": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.049880392s
STEP: Saw pod success
Apr 4 00:07:09.238: INFO: Pod "pod-secrets-13c20b21-a1e6-4bed-b9a0-fe9887d58932" satisfied condition "Succeeded or Failed"
Apr 4 00:07:09.241: INFO: Trying to get logs from node latest-worker2 pod pod-secrets-13c20b21-a1e6-4bed-b9a0-fe9887d58932 container secret-volume-test:
STEP: delete the pod
Apr 4 00:07:09.264: INFO: Waiting for pod pod-secrets-13c20b21-a1e6-4bed-b9a0-fe9887d58932 to disappear
Apr 4 00:07:09.269: INFO: Pod pod-secrets-13c20b21-a1e6-4bed-b9a0-fe9887d58932 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 4 00:07:09.269: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-8904" for this suite.
STEP: Destroying namespace "secret-namespace-9040" for this suite.
•{"msg":"PASSED [sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]","total":275,"completed":113,"skipped":1905,"failed":0}
SSS
------------------------------
[k8s.io] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 4 00:07:09.298: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:178
[It] should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Apr 4 00:07:09.336: INFO: >>> kubeConfig: /root/.kube/config
STEP: creating the pod
STEP: submitting the pod to kubernetes
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 4 00:07:13.411: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-4189" for this suite.
•{"msg":"PASSED [k8s.io] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance]","total":275,"completed":114,"skipped":1908,"failed":0}
SSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 4 00:07:13.419: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Apr 4 00:07:13.966: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Apr 4 00:07:15.976: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721555633, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721555633, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721555634, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721555633, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6cc9cc9dc\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Apr 4 00:07:19.005: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should be able to deny custom resource creation, update and deletion [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Apr 4 00:07:19.008: INFO: >>> kubeConfig: /root/.kube/config
STEP: Registering the custom resource webhook via the AdmissionRegistration API
STEP: Creating a custom resource that should be denied by the webhook
STEP: Creating a custom resource whose deletion would be denied by the webhook
STEP: Updating the custom resource with disallowed data should be denied
STEP: Deleting the custom resource should be denied
STEP: Remove the offending key and value from the custom resource data
STEP: Deleting the updated custom resource should be successful
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 4 00:07:20.159: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace
"webhook-5742" for this suite. STEP: Destroying namespace "webhook-5742-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:6.798 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to deny custom resource creation, update and deletion [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","total":275,"completed":115,"skipped":1913,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for pods for Subdomain [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 4 00:07:20.218: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for pods for Subdomain [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-9101.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-querier-2.dns-test-service-2.dns-9101.svc.cluster.local;check="$$(dig +tcp +noall +answer +search 
dns-querier-2.dns-test-service-2.dns-9101.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-querier-2.dns-test-service-2.dns-9101.svc.cluster.local;check="$$(dig +notcp +noall +answer +search dns-test-service-2.dns-9101.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service-2.dns-9101.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service-2.dns-9101.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service-2.dns-9101.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-9101.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-9101.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-querier-2.dns-test-service-2.dns-9101.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-9101.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-querier-2.dns-test-service-2.dns-9101.svc.cluster.local;check="$$(dig +notcp +noall +answer +search dns-test-service-2.dns-9101.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service-2.dns-9101.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service-2.dns-9101.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service-2.dns-9101.svc.cluster.local;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-9101.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Apr 4 00:07:26.344: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-9101.svc.cluster.local from pod dns-9101/dns-test-9a829297-45f2-47b4-93e6-47cb287d5096: the server could not find the requested resource (get pods dns-test-9a829297-45f2-47b4-93e6-47cb287d5096) Apr 4 00:07:26.347: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-9101.svc.cluster.local from pod dns-9101/dns-test-9a829297-45f2-47b4-93e6-47cb287d5096: the server could not find the requested resource (get pods dns-test-9a829297-45f2-47b4-93e6-47cb287d5096) Apr 4 00:07:26.351: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-9101.svc.cluster.local from pod dns-9101/dns-test-9a829297-45f2-47b4-93e6-47cb287d5096: the server could not find the requested resource (get pods dns-test-9a829297-45f2-47b4-93e6-47cb287d5096) Apr 4 00:07:26.354: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-9101.svc.cluster.local from pod dns-9101/dns-test-9a829297-45f2-47b4-93e6-47cb287d5096: the server could not find the requested resource (get pods dns-test-9a829297-45f2-47b4-93e6-47cb287d5096) Apr 4 00:07:26.363: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-9101.svc.cluster.local from pod dns-9101/dns-test-9a829297-45f2-47b4-93e6-47cb287d5096: the server could not find the requested resource (get pods dns-test-9a829297-45f2-47b4-93e6-47cb287d5096) Apr 4 00:07:26.365: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-9101.svc.cluster.local from pod 
dns-9101/dns-test-9a829297-45f2-47b4-93e6-47cb287d5096: the server could not find the requested resource (get pods dns-test-9a829297-45f2-47b4-93e6-47cb287d5096) Apr 4 00:07:26.367: INFO: Unable to read jessie_udp@dns-test-service-2.dns-9101.svc.cluster.local from pod dns-9101/dns-test-9a829297-45f2-47b4-93e6-47cb287d5096: the server could not find the requested resource (get pods dns-test-9a829297-45f2-47b4-93e6-47cb287d5096) Apr 4 00:07:26.370: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-9101.svc.cluster.local from pod dns-9101/dns-test-9a829297-45f2-47b4-93e6-47cb287d5096: the server could not find the requested resource (get pods dns-test-9a829297-45f2-47b4-93e6-47cb287d5096) Apr 4 00:07:26.374: INFO: Lookups using dns-9101/dns-test-9a829297-45f2-47b4-93e6-47cb287d5096 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-9101.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-9101.svc.cluster.local wheezy_udp@dns-test-service-2.dns-9101.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-9101.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-9101.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-9101.svc.cluster.local jessie_udp@dns-test-service-2.dns-9101.svc.cluster.local jessie_tcp@dns-test-service-2.dns-9101.svc.cluster.local] Apr 4 00:07:31.379: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-9101.svc.cluster.local from pod dns-9101/dns-test-9a829297-45f2-47b4-93e6-47cb287d5096: the server could not find the requested resource (get pods dns-test-9a829297-45f2-47b4-93e6-47cb287d5096) Apr 4 00:07:31.383: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-9101.svc.cluster.local from pod dns-9101/dns-test-9a829297-45f2-47b4-93e6-47cb287d5096: the server could not find the requested resource (get pods dns-test-9a829297-45f2-47b4-93e6-47cb287d5096) Apr 4 00:07:31.386: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-9101.svc.cluster.local from pod 
dns-9101/dns-test-9a829297-45f2-47b4-93e6-47cb287d5096: the server could not find the requested resource (get pods dns-test-9a829297-45f2-47b4-93e6-47cb287d5096) Apr 4 00:07:31.389: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-9101.svc.cluster.local from pod dns-9101/dns-test-9a829297-45f2-47b4-93e6-47cb287d5096: the server could not find the requested resource (get pods dns-test-9a829297-45f2-47b4-93e6-47cb287d5096) Apr 4 00:07:31.397: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-9101.svc.cluster.local from pod dns-9101/dns-test-9a829297-45f2-47b4-93e6-47cb287d5096: the server could not find the requested resource (get pods dns-test-9a829297-45f2-47b4-93e6-47cb287d5096) Apr 4 00:07:31.400: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-9101.svc.cluster.local from pod dns-9101/dns-test-9a829297-45f2-47b4-93e6-47cb287d5096: the server could not find the requested resource (get pods dns-test-9a829297-45f2-47b4-93e6-47cb287d5096) Apr 4 00:07:31.403: INFO: Unable to read jessie_udp@dns-test-service-2.dns-9101.svc.cluster.local from pod dns-9101/dns-test-9a829297-45f2-47b4-93e6-47cb287d5096: the server could not find the requested resource (get pods dns-test-9a829297-45f2-47b4-93e6-47cb287d5096) Apr 4 00:07:31.406: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-9101.svc.cluster.local from pod dns-9101/dns-test-9a829297-45f2-47b4-93e6-47cb287d5096: the server could not find the requested resource (get pods dns-test-9a829297-45f2-47b4-93e6-47cb287d5096) Apr 4 00:07:31.411: INFO: Lookups using dns-9101/dns-test-9a829297-45f2-47b4-93e6-47cb287d5096 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-9101.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-9101.svc.cluster.local wheezy_udp@dns-test-service-2.dns-9101.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-9101.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-9101.svc.cluster.local 
jessie_tcp@dns-querier-2.dns-test-service-2.dns-9101.svc.cluster.local jessie_udp@dns-test-service-2.dns-9101.svc.cluster.local jessie_tcp@dns-test-service-2.dns-9101.svc.cluster.local] Apr 4 00:07:36.378: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-9101.svc.cluster.local from pod dns-9101/dns-test-9a829297-45f2-47b4-93e6-47cb287d5096: the server could not find the requested resource (get pods dns-test-9a829297-45f2-47b4-93e6-47cb287d5096) Apr 4 00:07:36.381: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-9101.svc.cluster.local from pod dns-9101/dns-test-9a829297-45f2-47b4-93e6-47cb287d5096: the server could not find the requested resource (get pods dns-test-9a829297-45f2-47b4-93e6-47cb287d5096) Apr 4 00:07:36.384: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-9101.svc.cluster.local from pod dns-9101/dns-test-9a829297-45f2-47b4-93e6-47cb287d5096: the server could not find the requested resource (get pods dns-test-9a829297-45f2-47b4-93e6-47cb287d5096) Apr 4 00:07:36.387: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-9101.svc.cluster.local from pod dns-9101/dns-test-9a829297-45f2-47b4-93e6-47cb287d5096: the server could not find the requested resource (get pods dns-test-9a829297-45f2-47b4-93e6-47cb287d5096) Apr 4 00:07:36.396: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-9101.svc.cluster.local from pod dns-9101/dns-test-9a829297-45f2-47b4-93e6-47cb287d5096: the server could not find the requested resource (get pods dns-test-9a829297-45f2-47b4-93e6-47cb287d5096) Apr 4 00:07:36.399: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-9101.svc.cluster.local from pod dns-9101/dns-test-9a829297-45f2-47b4-93e6-47cb287d5096: the server could not find the requested resource (get pods dns-test-9a829297-45f2-47b4-93e6-47cb287d5096) Apr 4 00:07:36.401: INFO: Unable to read jessie_udp@dns-test-service-2.dns-9101.svc.cluster.local from pod 
dns-9101/dns-test-9a829297-45f2-47b4-93e6-47cb287d5096: the server could not find the requested resource (get pods dns-test-9a829297-45f2-47b4-93e6-47cb287d5096) Apr 4 00:07:36.404: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-9101.svc.cluster.local from pod dns-9101/dns-test-9a829297-45f2-47b4-93e6-47cb287d5096: the server could not find the requested resource (get pods dns-test-9a829297-45f2-47b4-93e6-47cb287d5096) Apr 4 00:07:36.409: INFO: Lookups using dns-9101/dns-test-9a829297-45f2-47b4-93e6-47cb287d5096 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-9101.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-9101.svc.cluster.local wheezy_udp@dns-test-service-2.dns-9101.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-9101.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-9101.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-9101.svc.cluster.local jessie_udp@dns-test-service-2.dns-9101.svc.cluster.local jessie_tcp@dns-test-service-2.dns-9101.svc.cluster.local] Apr 4 00:07:41.379: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-9101.svc.cluster.local from pod dns-9101/dns-test-9a829297-45f2-47b4-93e6-47cb287d5096: the server could not find the requested resource (get pods dns-test-9a829297-45f2-47b4-93e6-47cb287d5096) Apr 4 00:07:41.383: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-9101.svc.cluster.local from pod dns-9101/dns-test-9a829297-45f2-47b4-93e6-47cb287d5096: the server could not find the requested resource (get pods dns-test-9a829297-45f2-47b4-93e6-47cb287d5096) Apr 4 00:07:41.387: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-9101.svc.cluster.local from pod dns-9101/dns-test-9a829297-45f2-47b4-93e6-47cb287d5096: the server could not find the requested resource (get pods dns-test-9a829297-45f2-47b4-93e6-47cb287d5096) Apr 4 00:07:41.390: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-9101.svc.cluster.local from pod 
dns-9101/dns-test-9a829297-45f2-47b4-93e6-47cb287d5096: the server could not find the requested resource (get pods dns-test-9a829297-45f2-47b4-93e6-47cb287d5096) Apr 4 00:07:41.414: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-9101.svc.cluster.local from pod dns-9101/dns-test-9a829297-45f2-47b4-93e6-47cb287d5096: the server could not find the requested resource (get pods dns-test-9a829297-45f2-47b4-93e6-47cb287d5096) Apr 4 00:07:41.417: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-9101.svc.cluster.local from pod dns-9101/dns-test-9a829297-45f2-47b4-93e6-47cb287d5096: the server could not find the requested resource (get pods dns-test-9a829297-45f2-47b4-93e6-47cb287d5096) Apr 4 00:07:41.420: INFO: Unable to read jessie_udp@dns-test-service-2.dns-9101.svc.cluster.local from pod dns-9101/dns-test-9a829297-45f2-47b4-93e6-47cb287d5096: the server could not find the requested resource (get pods dns-test-9a829297-45f2-47b4-93e6-47cb287d5096) Apr 4 00:07:41.423: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-9101.svc.cluster.local from pod dns-9101/dns-test-9a829297-45f2-47b4-93e6-47cb287d5096: the server could not find the requested resource (get pods dns-test-9a829297-45f2-47b4-93e6-47cb287d5096) Apr 4 00:07:41.429: INFO: Lookups using dns-9101/dns-test-9a829297-45f2-47b4-93e6-47cb287d5096 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-9101.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-9101.svc.cluster.local wheezy_udp@dns-test-service-2.dns-9101.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-9101.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-9101.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-9101.svc.cluster.local jessie_udp@dns-test-service-2.dns-9101.svc.cluster.local jessie_tcp@dns-test-service-2.dns-9101.svc.cluster.local] Apr 4 00:07:46.378: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-9101.svc.cluster.local 
from pod dns-9101/dns-test-9a829297-45f2-47b4-93e6-47cb287d5096: the server could not find the requested resource (get pods dns-test-9a829297-45f2-47b4-93e6-47cb287d5096) Apr 4 00:07:46.381: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-9101.svc.cluster.local from pod dns-9101/dns-test-9a829297-45f2-47b4-93e6-47cb287d5096: the server could not find the requested resource (get pods dns-test-9a829297-45f2-47b4-93e6-47cb287d5096) Apr 4 00:07:46.384: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-9101.svc.cluster.local from pod dns-9101/dns-test-9a829297-45f2-47b4-93e6-47cb287d5096: the server could not find the requested resource (get pods dns-test-9a829297-45f2-47b4-93e6-47cb287d5096) Apr 4 00:07:46.386: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-9101.svc.cluster.local from pod dns-9101/dns-test-9a829297-45f2-47b4-93e6-47cb287d5096: the server could not find the requested resource (get pods dns-test-9a829297-45f2-47b4-93e6-47cb287d5096) Apr 4 00:07:46.394: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-9101.svc.cluster.local from pod dns-9101/dns-test-9a829297-45f2-47b4-93e6-47cb287d5096: the server could not find the requested resource (get pods dns-test-9a829297-45f2-47b4-93e6-47cb287d5096) Apr 4 00:07:46.397: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-9101.svc.cluster.local from pod dns-9101/dns-test-9a829297-45f2-47b4-93e6-47cb287d5096: the server could not find the requested resource (get pods dns-test-9a829297-45f2-47b4-93e6-47cb287d5096) Apr 4 00:07:46.400: INFO: Unable to read jessie_udp@dns-test-service-2.dns-9101.svc.cluster.local from pod dns-9101/dns-test-9a829297-45f2-47b4-93e6-47cb287d5096: the server could not find the requested resource (get pods dns-test-9a829297-45f2-47b4-93e6-47cb287d5096) Apr 4 00:07:46.403: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-9101.svc.cluster.local from pod dns-9101/dns-test-9a829297-45f2-47b4-93e6-47cb287d5096: the 
server could not find the requested resource (get pods dns-test-9a829297-45f2-47b4-93e6-47cb287d5096) Apr 4 00:07:46.409: INFO: Lookups using dns-9101/dns-test-9a829297-45f2-47b4-93e6-47cb287d5096 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-9101.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-9101.svc.cluster.local wheezy_udp@dns-test-service-2.dns-9101.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-9101.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-9101.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-9101.svc.cluster.local jessie_udp@dns-test-service-2.dns-9101.svc.cluster.local jessie_tcp@dns-test-service-2.dns-9101.svc.cluster.local] Apr 4 00:07:51.383: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-9101.svc.cluster.local from pod dns-9101/dns-test-9a829297-45f2-47b4-93e6-47cb287d5096: the server could not find the requested resource (get pods dns-test-9a829297-45f2-47b4-93e6-47cb287d5096) Apr 4 00:07:51.386: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-9101.svc.cluster.local from pod dns-9101/dns-test-9a829297-45f2-47b4-93e6-47cb287d5096: the server could not find the requested resource (get pods dns-test-9a829297-45f2-47b4-93e6-47cb287d5096) Apr 4 00:07:51.389: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-9101.svc.cluster.local from pod dns-9101/dns-test-9a829297-45f2-47b4-93e6-47cb287d5096: the server could not find the requested resource (get pods dns-test-9a829297-45f2-47b4-93e6-47cb287d5096) Apr 4 00:07:51.392: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-9101.svc.cluster.local from pod dns-9101/dns-test-9a829297-45f2-47b4-93e6-47cb287d5096: the server could not find the requested resource (get pods dns-test-9a829297-45f2-47b4-93e6-47cb287d5096) Apr 4 00:07:51.399: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-9101.svc.cluster.local from pod 
dns-9101/dns-test-9a829297-45f2-47b4-93e6-47cb287d5096: the server could not find the requested resource (get pods dns-test-9a829297-45f2-47b4-93e6-47cb287d5096) Apr 4 00:07:51.402: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-9101.svc.cluster.local from pod dns-9101/dns-test-9a829297-45f2-47b4-93e6-47cb287d5096: the server could not find the requested resource (get pods dns-test-9a829297-45f2-47b4-93e6-47cb287d5096) Apr 4 00:07:51.404: INFO: Unable to read jessie_udp@dns-test-service-2.dns-9101.svc.cluster.local from pod dns-9101/dns-test-9a829297-45f2-47b4-93e6-47cb287d5096: the server could not find the requested resource (get pods dns-test-9a829297-45f2-47b4-93e6-47cb287d5096) Apr 4 00:07:51.406: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-9101.svc.cluster.local from pod dns-9101/dns-test-9a829297-45f2-47b4-93e6-47cb287d5096: the server could not find the requested resource (get pods dns-test-9a829297-45f2-47b4-93e6-47cb287d5096) Apr 4 00:07:51.412: INFO: Lookups using dns-9101/dns-test-9a829297-45f2-47b4-93e6-47cb287d5096 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-9101.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-9101.svc.cluster.local wheezy_udp@dns-test-service-2.dns-9101.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-9101.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-9101.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-9101.svc.cluster.local jessie_udp@dns-test-service-2.dns-9101.svc.cluster.local jessie_tcp@dns-test-service-2.dns-9101.svc.cluster.local] Apr 4 00:07:56.421: INFO: DNS probes using dns-9101/dns-test-9a829297-45f2-47b4-93e6-47cb287d5096 succeeded STEP: deleting the pod STEP: deleting the test headless service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 4 00:07:56.672: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying 
namespace "dns-9101" for this suite. • [SLOW TEST:36.636 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for pods for Subdomain [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for pods for Subdomain [Conformance]","total":275,"completed":116,"skipped":1925,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 4 00:07:56.854: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test emptydir 0644 on node default medium Apr 4 00:07:57.021: INFO: Waiting up to 5m0s for pod "pod-346a9463-b58c-48c2-b440-5e2417cdbe75" in namespace "emptydir-1195" to be "Succeeded or Failed" Apr 4 00:07:57.037: INFO: Pod "pod-346a9463-b58c-48c2-b440-5e2417cdbe75": Phase="Pending", Reason="", readiness=false. Elapsed: 15.643045ms Apr 4 00:07:59.040: INFO: Pod "pod-346a9463-b58c-48c2-b440-5e2417cdbe75": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019283192s Apr 4 00:08:01.045: INFO: Pod "pod-346a9463-b58c-48c2-b440-5e2417cdbe75": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.023292726s STEP: Saw pod success Apr 4 00:08:01.045: INFO: Pod "pod-346a9463-b58c-48c2-b440-5e2417cdbe75" satisfied condition "Succeeded or Failed" Apr 4 00:08:01.048: INFO: Trying to get logs from node latest-worker pod pod-346a9463-b58c-48c2-b440-5e2417cdbe75 container test-container: STEP: delete the pod Apr 4 00:08:01.068: INFO: Waiting for pod pod-346a9463-b58c-48c2-b440-5e2417cdbe75 to disappear Apr 4 00:08:01.072: INFO: Pod pod-346a9463-b58c-48c2-b440-5e2417cdbe75 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 4 00:08:01.072: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-1195" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":117,"skipped":1946,"failed":0} SSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 4 00:08:01.079: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating projection with secret that has name projected-secret-test-map-06c3811a-515d-4cf5-a500-18158993a156 STEP: Creating a pod to test consume secrets Apr 4 00:08:01.147: INFO: Waiting up to 5m0s for pod 
"pod-projected-secrets-72a143ed-d478-438c-9711-8c34062f3276" in namespace "projected-5613" to be "Succeeded or Failed" Apr 4 00:08:01.185: INFO: Pod "pod-projected-secrets-72a143ed-d478-438c-9711-8c34062f3276": Phase="Pending", Reason="", readiness=false. Elapsed: 38.00851ms Apr 4 00:08:03.189: INFO: Pod "pod-projected-secrets-72a143ed-d478-438c-9711-8c34062f3276": Phase="Pending", Reason="", readiness=false. Elapsed: 2.042123275s Apr 4 00:08:05.193: INFO: Pod "pod-projected-secrets-72a143ed-d478-438c-9711-8c34062f3276": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.045914049s STEP: Saw pod success Apr 4 00:08:05.193: INFO: Pod "pod-projected-secrets-72a143ed-d478-438c-9711-8c34062f3276" satisfied condition "Succeeded or Failed" Apr 4 00:08:05.196: INFO: Trying to get logs from node latest-worker2 pod pod-projected-secrets-72a143ed-d478-438c-9711-8c34062f3276 container projected-secret-volume-test: STEP: delete the pod Apr 4 00:08:05.227: INFO: Waiting for pod pod-projected-secrets-72a143ed-d478-438c-9711-8c34062f3276 to disappear Apr 4 00:08:05.250: INFO: Pod pod-projected-secrets-72a143ed-d478-438c-9711-8c34062f3276 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 4 00:08:05.250: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5613" for this suite. 
•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":275,"completed":118,"skipped":1956,"failed":0}
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-auth] ServiceAccounts should mount an API token into pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 4 00:08:05.257: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should mount an API token into pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: getting the auto-created API token
STEP: reading a file in the container
Apr 4 00:08:09.829: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-841 pod-service-account-9a7b8f4d-375d-44b6-b452-362570fa1dd0 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/token'
STEP: reading a file in the container
Apr 4 00:08:10.077: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-841 pod-service-account-9a7b8f4d-375d-44b6-b452-362570fa1dd0 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/ca.crt'
STEP: reading a file in the container
Apr 4 00:08:10.310: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-841 pod-service-account-9a7b8f4d-375d-44b6-b452-362570fa1dd0 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/namespace'
[AfterEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 4 00:08:10.473: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svcaccounts-841" for this suite.
• [SLOW TEST:5.241 seconds]
[sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23
  should mount an API token into pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-auth] ServiceAccounts should mount an API token into pods [Conformance]","total":275,"completed":119,"skipped":1978,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 4 00:08:10.498: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test emptydir 0666 on tmpfs
Apr 4 00:08:10.593: INFO: Waiting up to 5m0s for pod "pod-6b5fc5b6-7248-4070-8b41-e99b7f9c3bf2" in namespace "emptydir-722" to be "Succeeded or Failed"
Apr 4 00:08:10.634: INFO: Pod "pod-6b5fc5b6-7248-4070-8b41-e99b7f9c3bf2": Phase="Pending", Reason="", readiness=false. Elapsed: 40.639352ms
Apr 4 00:08:12.638: INFO: Pod "pod-6b5fc5b6-7248-4070-8b41-e99b7f9c3bf2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.044292132s
Apr 4 00:08:14.642: INFO: Pod "pod-6b5fc5b6-7248-4070-8b41-e99b7f9c3bf2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.048529009s
STEP: Saw pod success
Apr 4 00:08:14.642: INFO: Pod "pod-6b5fc5b6-7248-4070-8b41-e99b7f9c3bf2" satisfied condition "Succeeded or Failed"
Apr 4 00:08:14.646: INFO: Trying to get logs from node latest-worker pod pod-6b5fc5b6-7248-4070-8b41-e99b7f9c3bf2 container test-container: 
STEP: delete the pod
Apr 4 00:08:14.676: INFO: Waiting for pod pod-6b5fc5b6-7248-4070-8b41-e99b7f9c3bf2 to disappear
Apr 4 00:08:14.692: INFO: Pod pod-6b5fc5b6-7248-4070-8b41-e99b7f9c3bf2 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 4 00:08:14.692: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-722" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":120,"skipped":2028,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Services should be able to create a functioning NodePort service [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 4 00:08:14.700: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:698
[It] should be able to create a functioning NodePort service [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating service nodeport-test with type=NodePort in namespace services-3674
STEP: creating replication
controller nodeport-test in namespace services-3674 I0404 00:08:14.842820 7 runners.go:190] Created replication controller with name: nodeport-test, namespace: services-3674, replica count: 2 I0404 00:08:17.893316 7 runners.go:190] nodeport-test Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0404 00:08:20.893567 7 runners.go:190] nodeport-test Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Apr 4 00:08:20.893: INFO: Creating new exec pod Apr 4 00:08:25.915: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=services-3674 execpodww9tk -- /bin/sh -x -c nc -zv -t -w 2 nodeport-test 80' Apr 4 00:08:26.155: INFO: stderr: "I0404 00:08:26.052168 1121 log.go:172] (0xc000bdf340) (0xc000ae4960) Create stream\nI0404 00:08:26.054137 1121 log.go:172] (0xc000bdf340) (0xc000ae4960) Stream added, broadcasting: 1\nI0404 00:08:26.058504 1121 log.go:172] (0xc000bdf340) Reply frame received for 1\nI0404 00:08:26.058545 1121 log.go:172] (0xc000bdf340) (0xc0005c75e0) Create stream\nI0404 00:08:26.058554 1121 log.go:172] (0xc000bdf340) (0xc0005c75e0) Stream added, broadcasting: 3\nI0404 00:08:26.059724 1121 log.go:172] (0xc000bdf340) Reply frame received for 3\nI0404 00:08:26.059788 1121 log.go:172] (0xc000bdf340) (0xc0004eea00) Create stream\nI0404 00:08:26.059818 1121 log.go:172] (0xc000bdf340) (0xc0004eea00) Stream added, broadcasting: 5\nI0404 00:08:26.061343 1121 log.go:172] (0xc000bdf340) Reply frame received for 5\nI0404 00:08:26.149528 1121 log.go:172] (0xc000bdf340) Data frame received for 5\nI0404 00:08:26.149565 1121 log.go:172] (0xc0004eea00) (5) Data frame handling\nI0404 00:08:26.149578 1121 log.go:172] (0xc0004eea00) (5) Data frame sent\nI0404 00:08:26.149584 1121 log.go:172] (0xc000bdf340) Data frame received for 5\nI0404 00:08:26.149589 1121 log.go:172] 
(0xc0004eea00) (5) Data frame handling\n+ nc -zv -t -w 2 nodeport-test 80\nConnection to nodeport-test 80 port [tcp/http] succeeded!\nI0404 00:08:26.149616 1121 log.go:172] (0xc0004eea00) (5) Data frame sent\nI0404 00:08:26.149691 1121 log.go:172] (0xc000bdf340) Data frame received for 5\nI0404 00:08:26.149702 1121 log.go:172] (0xc0004eea00) (5) Data frame handling\nI0404 00:08:26.149724 1121 log.go:172] (0xc000bdf340) Data frame received for 3\nI0404 00:08:26.149734 1121 log.go:172] (0xc0005c75e0) (3) Data frame handling\nI0404 00:08:26.151761 1121 log.go:172] (0xc000bdf340) Data frame received for 1\nI0404 00:08:26.151780 1121 log.go:172] (0xc000ae4960) (1) Data frame handling\nI0404 00:08:26.151796 1121 log.go:172] (0xc000ae4960) (1) Data frame sent\nI0404 00:08:26.151808 1121 log.go:172] (0xc000bdf340) (0xc000ae4960) Stream removed, broadcasting: 1\nI0404 00:08:26.152046 1121 log.go:172] (0xc000bdf340) Go away received\nI0404 00:08:26.152090 1121 log.go:172] (0xc000bdf340) (0xc000ae4960) Stream removed, broadcasting: 1\nI0404 00:08:26.152113 1121 log.go:172] (0xc000bdf340) (0xc0005c75e0) Stream removed, broadcasting: 3\nI0404 00:08:26.152124 1121 log.go:172] (0xc000bdf340) (0xc0004eea00) Stream removed, broadcasting: 5\n" Apr 4 00:08:26.155: INFO: stdout: "" Apr 4 00:08:26.156: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=services-3674 execpodww9tk -- /bin/sh -x -c nc -zv -t -w 2 10.96.77.159 80' Apr 4 00:08:26.367: INFO: stderr: "I0404 00:08:26.293959 1140 log.go:172] (0xc00050a9a0) (0xc000938140) Create stream\nI0404 00:08:26.294007 1140 log.go:172] (0xc00050a9a0) (0xc000938140) Stream added, broadcasting: 1\nI0404 00:08:26.296660 1140 log.go:172] (0xc00050a9a0) Reply frame received for 1\nI0404 00:08:26.296700 1140 log.go:172] (0xc00050a9a0) (0xc000938280) Create stream\nI0404 00:08:26.296713 1140 log.go:172] (0xc00050a9a0) (0xc000938280) Stream added, broadcasting: 3\nI0404 
00:08:26.297917 1140 log.go:172] (0xc00050a9a0) Reply frame received for 3\nI0404 00:08:26.297958 1140 log.go:172] (0xc00050a9a0) (0xc0006d1220) Create stream\nI0404 00:08:26.297968 1140 log.go:172] (0xc00050a9a0) (0xc0006d1220) Stream added, broadcasting: 5\nI0404 00:08:26.298757 1140 log.go:172] (0xc00050a9a0) Reply frame received for 5\nI0404 00:08:26.361324 1140 log.go:172] (0xc00050a9a0) Data frame received for 5\nI0404 00:08:26.361396 1140 log.go:172] (0xc0006d1220) (5) Data frame handling\nI0404 00:08:26.361422 1140 log.go:172] (0xc0006d1220) (5) Data frame sent\n+ nc -zv -t -w 2 10.96.77.159 80\nConnection to 10.96.77.159 80 port [tcp/http] succeeded!\nI0404 00:08:26.361454 1140 log.go:172] (0xc00050a9a0) Data frame received for 5\nI0404 00:08:26.361479 1140 log.go:172] (0xc00050a9a0) Data frame received for 3\nI0404 00:08:26.361512 1140 log.go:172] (0xc000938280) (3) Data frame handling\nI0404 00:08:26.361533 1140 log.go:172] (0xc0006d1220) (5) Data frame handling\nI0404 00:08:26.362903 1140 log.go:172] (0xc00050a9a0) Data frame received for 1\nI0404 00:08:26.362923 1140 log.go:172] (0xc000938140) (1) Data frame handling\nI0404 00:08:26.362932 1140 log.go:172] (0xc000938140) (1) Data frame sent\nI0404 00:08:26.363032 1140 log.go:172] (0xc00050a9a0) (0xc000938140) Stream removed, broadcasting: 1\nI0404 00:08:26.363326 1140 log.go:172] (0xc00050a9a0) Go away received\nI0404 00:08:26.363591 1140 log.go:172] (0xc00050a9a0) (0xc000938140) Stream removed, broadcasting: 1\nI0404 00:08:26.363635 1140 log.go:172] (0xc00050a9a0) (0xc000938280) Stream removed, broadcasting: 3\nI0404 00:08:26.363657 1140 log.go:172] (0xc00050a9a0) (0xc0006d1220) Stream removed, broadcasting: 5\n" Apr 4 00:08:26.367: INFO: stdout: "" Apr 4 00:08:26.367: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=services-3674 execpodww9tk -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.13 31422' Apr 4 00:08:26.556: INFO: stderr: 
"I0404 00:08:26.478554 1163 log.go:172] (0xc0004f2790) (0xc000892140) Create stream\nI0404 00:08:26.478602 1163 log.go:172] (0xc0004f2790) (0xc000892140) Stream added, broadcasting: 1\nI0404 00:08:26.480405 1163 log.go:172] (0xc0004f2790) Reply frame received for 1\nI0404 00:08:26.480427 1163 log.go:172] (0xc0004f2790) (0xc0008921e0) Create stream\nI0404 00:08:26.480442 1163 log.go:172] (0xc0004f2790) (0xc0008921e0) Stream added, broadcasting: 3\nI0404 00:08:26.481564 1163 log.go:172] (0xc0004f2790) Reply frame received for 3\nI0404 00:08:26.481613 1163 log.go:172] (0xc0004f2790) (0xc0004115e0) Create stream\nI0404 00:08:26.481628 1163 log.go:172] (0xc0004f2790) (0xc0004115e0) Stream added, broadcasting: 5\nI0404 00:08:26.482463 1163 log.go:172] (0xc0004f2790) Reply frame received for 5\nI0404 00:08:26.550435 1163 log.go:172] (0xc0004f2790) Data frame received for 5\nI0404 00:08:26.550479 1163 log.go:172] (0xc0004115e0) (5) Data frame handling\nI0404 00:08:26.550494 1163 log.go:172] (0xc0004115e0) (5) Data frame sent\nI0404 00:08:26.550506 1163 log.go:172] (0xc0004f2790) Data frame received for 5\nI0404 00:08:26.550516 1163 log.go:172] (0xc0004115e0) (5) Data frame handling\n+ nc -zv -t -w 2 172.17.0.13 31422\nConnection to 172.17.0.13 31422 port [tcp/31422] succeeded!\nI0404 00:08:26.550551 1163 log.go:172] (0xc0004f2790) Data frame received for 3\nI0404 00:08:26.550570 1163 log.go:172] (0xc0008921e0) (3) Data frame handling\nI0404 00:08:26.551683 1163 log.go:172] (0xc0004f2790) Data frame received for 1\nI0404 00:08:26.551697 1163 log.go:172] (0xc000892140) (1) Data frame handling\nI0404 00:08:26.551717 1163 log.go:172] (0xc000892140) (1) Data frame sent\nI0404 00:08:26.551740 1163 log.go:172] (0xc0004f2790) (0xc000892140) Stream removed, broadcasting: 1\nI0404 00:08:26.551830 1163 log.go:172] (0xc0004f2790) Go away received\nI0404 00:08:26.552240 1163 log.go:172] (0xc0004f2790) (0xc000892140) Stream removed, broadcasting: 1\nI0404 00:08:26.552271 1163 
log.go:172] (0xc0004f2790) (0xc0008921e0) Stream removed, broadcasting: 3\nI0404 00:08:26.552285 1163 log.go:172] (0xc0004f2790) (0xc0004115e0) Stream removed, broadcasting: 5\n" Apr 4 00:08:26.556: INFO: stdout: "" Apr 4 00:08:26.556: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=services-3674 execpodww9tk -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.12 31422' Apr 4 00:08:26.764: INFO: stderr: "I0404 00:08:26.678971 1184 log.go:172] (0xc00003ae70) (0xc0006500a0) Create stream\nI0404 00:08:26.679029 1184 log.go:172] (0xc00003ae70) (0xc0006500a0) Stream added, broadcasting: 1\nI0404 00:08:26.682128 1184 log.go:172] (0xc00003ae70) Reply frame received for 1\nI0404 00:08:26.682175 1184 log.go:172] (0xc00003ae70) (0xc000653220) Create stream\nI0404 00:08:26.682190 1184 log.go:172] (0xc00003ae70) (0xc000653220) Stream added, broadcasting: 3\nI0404 00:08:26.683203 1184 log.go:172] (0xc00003ae70) Reply frame received for 3\nI0404 00:08:26.683225 1184 log.go:172] (0xc00003ae70) (0xc000653400) Create stream\nI0404 00:08:26.683233 1184 log.go:172] (0xc00003ae70) (0xc000653400) Stream added, broadcasting: 5\nI0404 00:08:26.684269 1184 log.go:172] (0xc00003ae70) Reply frame received for 5\nI0404 00:08:26.756619 1184 log.go:172] (0xc00003ae70) Data frame received for 5\nI0404 00:08:26.756670 1184 log.go:172] (0xc000653400) (5) Data frame handling\nI0404 00:08:26.756712 1184 log.go:172] (0xc000653400) (5) Data frame sent\nI0404 00:08:26.756732 1184 log.go:172] (0xc00003ae70) Data frame received for 5\nI0404 00:08:26.756747 1184 log.go:172] (0xc000653400) (5) Data frame handling\n+ nc -zv -t -w 2 172.17.0.12 31422\nConnection to 172.17.0.12 31422 port [tcp/31422] succeeded!\nI0404 00:08:26.756783 1184 log.go:172] (0xc00003ae70) Data frame received for 3\nI0404 00:08:26.756801 1184 log.go:172] (0xc000653220) (3) Data frame handling\nI0404 00:08:26.758671 1184 log.go:172] (0xc00003ae70) Data frame received for 
1\nI0404 00:08:26.758701 1184 log.go:172] (0xc0006500a0) (1) Data frame handling\nI0404 00:08:26.758734 1184 log.go:172] (0xc0006500a0) (1) Data frame sent\nI0404 00:08:26.758756 1184 log.go:172] (0xc00003ae70) (0xc0006500a0) Stream removed, broadcasting: 1\nI0404 00:08:26.758800 1184 log.go:172] (0xc00003ae70) Go away received\nI0404 00:08:26.759142 1184 log.go:172] (0xc00003ae70) (0xc0006500a0) Stream removed, broadcasting: 1\nI0404 00:08:26.759169 1184 log.go:172] (0xc00003ae70) (0xc000653220) Stream removed, broadcasting: 3\nI0404 00:08:26.759189 1184 log.go:172] (0xc00003ae70) (0xc000653400) Stream removed, broadcasting: 5\n"
Apr 4 00:08:26.764: INFO: stdout: ""
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 4 00:08:26.764: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-3674" for this suite.
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:702
• [SLOW TEST:12.075 seconds]
[sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should be able to create a functioning NodePort service [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-network] Services should be able to create a functioning NodePort service [Conformance]","total":275,"completed":121,"skipped":2063,"failed":0}
SSSSSSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 4 00:08:26.775: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64
STEP: create the container to handle the HTTPGet hook request.
[It] should execute poststart exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: create the pod with lifecycle hook
STEP: check poststart hook
STEP: delete the pod with lifecycle hook
Apr 4 00:08:34.969: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Apr 4 00:08:34.987: INFO: Pod pod-with-poststart-exec-hook still exists
Apr 4 00:08:36.987: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Apr 4 00:08:37.005: INFO: Pod pod-with-poststart-exec-hook still exists
Apr 4 00:08:38.987: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Apr 4 00:08:38.991: INFO: Pod pod-with-poststart-exec-hook still exists
Apr 4 00:08:40.987: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Apr 4 00:08:40.992: INFO: Pod pod-with-poststart-exec-hook still exists
Apr 4 00:08:42.987: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Apr 4 00:08:42.991: INFO: Pod pod-with-poststart-exec-hook no longer exists
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 4 00:08:42.991: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-4918" for this suite.
• [SLOW TEST:16.225 seconds]
[k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
  should execute poststart exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance]","total":275,"completed":122,"skipped":2074,"failed":0}
SSSS
------------------------------
[k8s.io] Security Context When creating a pod with privileged should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 4 00:08:43.001: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41
[It] should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Apr 4 00:08:43.112: INFO: Waiting up to 5m0s for pod "busybox-privileged-false-66452e86-bdb9-4ca3-8381-6ab4db5970f2" in namespace "security-context-test-5328" to be "Succeeded or Failed"
Apr 4 00:08:43.116: INFO: Pod "busybox-privileged-false-66452e86-bdb9-4ca3-8381-6ab4db5970f2": Phase="Pending", Reason="", readiness=false. Elapsed: 3.523367ms
Apr 4 00:08:45.120: INFO: Pod "busybox-privileged-false-66452e86-bdb9-4ca3-8381-6ab4db5970f2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00787009s
Apr 4 00:08:47.125: INFO: Pod "busybox-privileged-false-66452e86-bdb9-4ca3-8381-6ab4db5970f2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012142936s
Apr 4 00:08:47.125: INFO: Pod "busybox-privileged-false-66452e86-bdb9-4ca3-8381-6ab4db5970f2" satisfied condition "Succeeded or Failed"
Apr 4 00:08:47.132: INFO: Got logs for pod "busybox-privileged-false-66452e86-bdb9-4ca3-8381-6ab4db5970f2": "ip: RTNETLINK answers: Operation not permitted\n"
[AfterEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 4 00:08:47.132: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-5328" for this suite.
•{"msg":"PASSED [k8s.io] Security Context When creating a pod with privileged should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":123,"skipped":2078,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 4 00:08:47.142: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating secret with name s-test-opt-del-83c8a01d-1ea2-4700-b648-26d927f564ff
STEP: Creating secret with name s-test-opt-upd-fa6d7068-f674-42f4-8a0c-e9c1acfa0994
STEP: Creating the pod
STEP: Deleting secret s-test-opt-del-83c8a01d-1ea2-4700-b648-26d927f564ff
STEP: Updating secret s-test-opt-upd-fa6d7068-f674-42f4-8a0c-e9c1acfa0994
STEP: Creating secret with name s-test-opt-create-5e7d1a8b-0305-473d-8788-2752111647a8
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 4 00:08:55.332: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-5126" for this suite.
• [SLOW TEST:8.197 seconds]
[sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:36
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance]","total":275,"completed":124,"skipped":2103,"failed":0}
SSSS
------------------------------
[sig-apps] Job should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] Job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 4 00:08:55.339: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename job
STEP: Waiting for a default service account to be provisioned in namespace
[It] should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a job
STEP: Ensuring job reaches completions
[AfterEach] [sig-apps] Job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 4 00:09:09.431: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "job-1014" for this suite.
• [SLOW TEST:14.099 seconds]
[sig-apps] Job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-apps] Job should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]","total":275,"completed":125,"skipped":2107,"failed":0}
SSSSSSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 4 00:09:09.439: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:84
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:99
STEP: Creating service test in namespace statefulset-5060
[It] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Initializing watcher for selector baz=blah,foo=bar
STEP: Creating stateful set ss in namespace statefulset-5060
STEP: Waiting until all stateful set ss replicas will be running in
namespace statefulset-5060 Apr 4 00:09:09.873: INFO: Found 0 stateful pods, waiting for 1 Apr 4 00:09:19.877: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: Confirming that stateful set scale up will halt with unhealthy stateful pod Apr 4 00:09:19.880: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-5060 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Apr 4 00:09:20.166: INFO: stderr: "I0404 00:09:20.031282 1207 log.go:172] (0xc00003aa50) (0xc000611400) Create stream\nI0404 00:09:20.031364 1207 log.go:172] (0xc00003aa50) (0xc000611400) Stream added, broadcasting: 1\nI0404 00:09:20.034604 1207 log.go:172] (0xc00003aa50) Reply frame received for 1\nI0404 00:09:20.034658 1207 log.go:172] (0xc00003aa50) (0xc0008fe000) Create stream\nI0404 00:09:20.034678 1207 log.go:172] (0xc00003aa50) (0xc0008fe000) Stream added, broadcasting: 3\nI0404 00:09:20.035792 1207 log.go:172] (0xc00003aa50) Reply frame received for 3\nI0404 00:09:20.035835 1207 log.go:172] (0xc00003aa50) (0xc000a1a000) Create stream\nI0404 00:09:20.035855 1207 log.go:172] (0xc00003aa50) (0xc000a1a000) Stream added, broadcasting: 5\nI0404 00:09:20.036950 1207 log.go:172] (0xc00003aa50) Reply frame received for 5\nI0404 00:09:20.131236 1207 log.go:172] (0xc00003aa50) Data frame received for 5\nI0404 00:09:20.131269 1207 log.go:172] (0xc000a1a000) (5) Data frame handling\nI0404 00:09:20.131299 1207 log.go:172] (0xc000a1a000) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0404 00:09:20.159365 1207 log.go:172] (0xc00003aa50) Data frame received for 3\nI0404 00:09:20.159391 1207 log.go:172] (0xc0008fe000) (3) Data frame handling\nI0404 00:09:20.159411 1207 log.go:172] (0xc0008fe000) (3) Data frame sent\nI0404 00:09:20.159662 1207 log.go:172] (0xc00003aa50) Data frame received for 5\nI0404 00:09:20.159678 1207 log.go:172] 
(0xc000a1a000) (5) Data frame handling\nI0404 00:09:20.159696 1207 log.go:172] (0xc00003aa50) Data frame received for 3\nI0404 00:09:20.159717 1207 log.go:172] (0xc0008fe000) (3) Data frame handling\nI0404 00:09:20.161547 1207 log.go:172] (0xc00003aa50) Data frame received for 1\nI0404 00:09:20.161561 1207 log.go:172] (0xc000611400) (1) Data frame handling\nI0404 00:09:20.161576 1207 log.go:172] (0xc000611400) (1) Data frame sent\nI0404 00:09:20.161645 1207 log.go:172] (0xc00003aa50) (0xc000611400) Stream removed, broadcasting: 1\nI0404 00:09:20.161933 1207 log.go:172] (0xc00003aa50) Go away received\nI0404 00:09:20.161970 1207 log.go:172] (0xc00003aa50) (0xc000611400) Stream removed, broadcasting: 1\nI0404 00:09:20.162000 1207 log.go:172] (0xc00003aa50) (0xc0008fe000) Stream removed, broadcasting: 3\nI0404 00:09:20.162018 1207 log.go:172] (0xc00003aa50) (0xc000a1a000) Stream removed, broadcasting: 5\n" Apr 4 00:09:20.167: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Apr 4 00:09:20.167: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Apr 4 00:09:20.171: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true Apr 4 00:09:30.175: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Apr 4 00:09:30.175: INFO: Waiting for statefulset status.replicas updated to 0 Apr 4 00:09:30.195: INFO: Verifying statefulset ss doesn't scale past 1 for another 9.999999686s Apr 4 00:09:31.200: INFO: Verifying statefulset ss doesn't scale past 1 for another 8.994602319s Apr 4 00:09:32.204: INFO: Verifying statefulset ss doesn't scale past 1 for another 7.990159814s Apr 4 00:09:33.208: INFO: Verifying statefulset ss doesn't scale past 1 for another 6.985530097s Apr 4 00:09:34.212: INFO: Verifying statefulset ss doesn't scale past 1 for another 5.981771633s Apr 4 00:09:35.217: INFO: Verifying 
statefulset ss doesn't scale past 1 for another 4.977475729s Apr 4 00:09:36.221: INFO: Verifying statefulset ss doesn't scale past 1 for another 3.972824226s Apr 4 00:09:37.226: INFO: Verifying statefulset ss doesn't scale past 1 for another 2.968366291s Apr 4 00:09:38.230: INFO: Verifying statefulset ss doesn't scale past 1 for another 1.963800622s Apr 4 00:09:39.234: INFO: Verifying statefulset ss doesn't scale past 1 for another 959.341017ms STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-5060 Apr 4 00:09:40.239: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-5060 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Apr 4 00:09:40.464: INFO: stderr: "I0404 00:09:40.379024 1230 log.go:172] (0xc000b0a630) (0xc0009f2000) Create stream\nI0404 00:09:40.379099 1230 log.go:172] (0xc000b0a630) (0xc0009f2000) Stream added, broadcasting: 1\nI0404 00:09:40.382234 1230 log.go:172] (0xc000b0a630) Reply frame received for 1\nI0404 00:09:40.382270 1230 log.go:172] (0xc000b0a630) (0xc0009c8000) Create stream\nI0404 00:09:40.382284 1230 log.go:172] (0xc000b0a630) (0xc0009c8000) Stream added, broadcasting: 3\nI0404 00:09:40.383269 1230 log.go:172] (0xc000b0a630) Reply frame received for 3\nI0404 00:09:40.383302 1230 log.go:172] (0xc000b0a630) (0xc0009c80a0) Create stream\nI0404 00:09:40.383341 1230 log.go:172] (0xc000b0a630) (0xc0009c80a0) Stream added, broadcasting: 5\nI0404 00:09:40.384272 1230 log.go:172] (0xc000b0a630) Reply frame received for 5\nI0404 00:09:40.458117 1230 log.go:172] (0xc000b0a630) Data frame received for 3\nI0404 00:09:40.458168 1230 log.go:172] (0xc0009c8000) (3) Data frame handling\nI0404 00:09:40.458206 1230 log.go:172] (0xc0009c8000) (3) Data frame sent\nI0404 00:09:40.458233 1230 log.go:172] (0xc000b0a630) Data frame received for 3\nI0404 00:09:40.458258 1230 log.go:172] 
(0xc0009c8000) (3) Data frame handling\nI0404 00:09:40.458289 1230 log.go:172] (0xc000b0a630) Data frame received for 5\nI0404 00:09:40.458312 1230 log.go:172] (0xc0009c80a0) (5) Data frame handling\nI0404 00:09:40.458325 1230 log.go:172] (0xc0009c80a0) (5) Data frame sent\nI0404 00:09:40.458337 1230 log.go:172] (0xc000b0a630) Data frame received for 5\nI0404 00:09:40.458346 1230 log.go:172] (0xc0009c80a0) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0404 00:09:40.460045 1230 log.go:172] (0xc000b0a630) Data frame received for 1\nI0404 00:09:40.460078 1230 log.go:172] (0xc0009f2000) (1) Data frame handling\nI0404 00:09:40.460106 1230 log.go:172] (0xc0009f2000) (1) Data frame sent\nI0404 00:09:40.460144 1230 log.go:172] (0xc000b0a630) (0xc0009f2000) Stream removed, broadcasting: 1\nI0404 00:09:40.460187 1230 log.go:172] (0xc000b0a630) Go away received\nI0404 00:09:40.460667 1230 log.go:172] (0xc000b0a630) (0xc0009f2000) Stream removed, broadcasting: 1\nI0404 00:09:40.460687 1230 log.go:172] (0xc000b0a630) (0xc0009c8000) Stream removed, broadcasting: 3\nI0404 00:09:40.460697 1230 log.go:172] (0xc000b0a630) (0xc0009c80a0) Stream removed, broadcasting: 5\n" Apr 4 00:09:40.464: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Apr 4 00:09:40.464: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Apr 4 00:09:40.468: INFO: Found 1 stateful pods, waiting for 3 Apr 4 00:09:50.473: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true Apr 4 00:09:50.473: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true Apr 4 00:09:50.473: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Verifying that stateful set ss was scaled up in order STEP: Scale down will halt with unhealthy stateful pod Apr 4 00:09:50.479: INFO: 
Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-5060 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Apr 4 00:09:50.729: INFO: stderr: "I0404 00:09:50.616728 1252 log.go:172] (0xc00003a420) (0xc00090e000) Create stream\nI0404 00:09:50.616793 1252 log.go:172] (0xc00003a420) (0xc00090e000) Stream added, broadcasting: 1\nI0404 00:09:50.619702 1252 log.go:172] (0xc00003a420) Reply frame received for 1\nI0404 00:09:50.619745 1252 log.go:172] (0xc00003a420) (0xc0006195e0) Create stream\nI0404 00:09:50.619756 1252 log.go:172] (0xc00003a420) (0xc0006195e0) Stream added, broadcasting: 3\nI0404 00:09:50.620990 1252 log.go:172] (0xc00003a420) Reply frame received for 3\nI0404 00:09:50.621029 1252 log.go:172] (0xc00003a420) (0xc00090e0a0) Create stream\nI0404 00:09:50.621041 1252 log.go:172] (0xc00003a420) (0xc00090e0a0) Stream added, broadcasting: 5\nI0404 00:09:50.622320 1252 log.go:172] (0xc00003a420) Reply frame received for 5\nI0404 00:09:50.720972 1252 log.go:172] (0xc00003a420) Data frame received for 5\nI0404 00:09:50.721019 1252 log.go:172] (0xc00090e0a0) (5) Data frame handling\nI0404 00:09:50.721049 1252 log.go:172] (0xc00090e0a0) (5) Data frame sent\nI0404 00:09:50.721070 1252 log.go:172] (0xc00003a420) Data frame received for 5\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0404 00:09:50.721096 1252 log.go:172] (0xc00003a420) Data frame received for 3\nI0404 00:09:50.721298 1252 log.go:172] (0xc0006195e0) (3) Data frame handling\nI0404 00:09:50.721338 1252 log.go:172] (0xc0006195e0) (3) Data frame sent\nI0404 00:09:50.721370 1252 log.go:172] (0xc00003a420) Data frame received for 3\nI0404 00:09:50.721392 1252 log.go:172] (0xc0006195e0) (3) Data frame handling\nI0404 00:09:50.721445 1252 log.go:172] (0xc00090e0a0) (5) Data frame handling\nI0404 00:09:50.722843 1252 log.go:172] (0xc00003a420) Data frame received for 1\nI0404 00:09:50.722873 1252 
log.go:172] (0xc00090e000) (1) Data frame handling\nI0404 00:09:50.722887 1252 log.go:172] (0xc00090e000) (1) Data frame sent\nI0404 00:09:50.722903 1252 log.go:172] (0xc00003a420) (0xc00090e000) Stream removed, broadcasting: 1\nI0404 00:09:50.722923 1252 log.go:172] (0xc00003a420) Go away received\nI0404 00:09:50.723352 1252 log.go:172] (0xc00003a420) (0xc00090e000) Stream removed, broadcasting: 1\nI0404 00:09:50.723380 1252 log.go:172] (0xc00003a420) (0xc0006195e0) Stream removed, broadcasting: 3\nI0404 00:09:50.723393 1252 log.go:172] (0xc00003a420) (0xc00090e0a0) Stream removed, broadcasting: 5\n" Apr 4 00:09:50.729: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Apr 4 00:09:50.729: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Apr 4 00:09:50.729: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-5060 ss-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Apr 4 00:09:51.004: INFO: stderr: "I0404 00:09:50.885695 1274 log.go:172] (0xc000a2c370) (0xc000644140) Create stream\nI0404 00:09:50.885779 1274 log.go:172] (0xc000a2c370) (0xc000644140) Stream added, broadcasting: 1\nI0404 00:09:50.888579 1274 log.go:172] (0xc000a2c370) Reply frame received for 1\nI0404 00:09:50.888620 1274 log.go:172] (0xc000a2c370) (0xc000661360) Create stream\nI0404 00:09:50.888639 1274 log.go:172] (0xc000a2c370) (0xc000661360) Stream added, broadcasting: 3\nI0404 00:09:50.889678 1274 log.go:172] (0xc000a2c370) Reply frame received for 3\nI0404 00:09:50.889752 1274 log.go:172] (0xc000a2c370) (0xc0008e8000) Create stream\nI0404 00:09:50.889772 1274 log.go:172] (0xc000a2c370) (0xc0008e8000) Stream added, broadcasting: 5\nI0404 00:09:50.890583 1274 log.go:172] (0xc000a2c370) Reply frame received for 5\nI0404 00:09:50.954056 1274 log.go:172] (0xc000a2c370) 
Data frame received for 5\nI0404 00:09:50.954093 1274 log.go:172] (0xc0008e8000) (5) Data frame handling\nI0404 00:09:50.954108 1274 log.go:172] (0xc0008e8000) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0404 00:09:50.998267 1274 log.go:172] (0xc000a2c370) Data frame received for 3\nI0404 00:09:50.998295 1274 log.go:172] (0xc000661360) (3) Data frame handling\nI0404 00:09:50.998312 1274 log.go:172] (0xc000661360) (3) Data frame sent\nI0404 00:09:50.998495 1274 log.go:172] (0xc000a2c370) Data frame received for 3\nI0404 00:09:50.998514 1274 log.go:172] (0xc000661360) (3) Data frame handling\nI0404 00:09:50.998777 1274 log.go:172] (0xc000a2c370) Data frame received for 5\nI0404 00:09:50.998802 1274 log.go:172] (0xc0008e8000) (5) Data frame handling\nI0404 00:09:51.000467 1274 log.go:172] (0xc000a2c370) Data frame received for 1\nI0404 00:09:51.000489 1274 log.go:172] (0xc000644140) (1) Data frame handling\nI0404 00:09:51.000503 1274 log.go:172] (0xc000644140) (1) Data frame sent\nI0404 00:09:51.000516 1274 log.go:172] (0xc000a2c370) (0xc000644140) Stream removed, broadcasting: 1\nI0404 00:09:51.000671 1274 log.go:172] (0xc000a2c370) Go away received\nI0404 00:09:51.000864 1274 log.go:172] (0xc000a2c370) (0xc000644140) Stream removed, broadcasting: 1\nI0404 00:09:51.000885 1274 log.go:172] (0xc000a2c370) (0xc000661360) Stream removed, broadcasting: 3\nI0404 00:09:51.000896 1274 log.go:172] (0xc000a2c370) (0xc0008e8000) Stream removed, broadcasting: 5\n" Apr 4 00:09:51.004: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Apr 4 00:09:51.004: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Apr 4 00:09:51.004: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-5060 ss-2 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || 
true' Apr 4 00:09:51.248: INFO: stderr: "I0404 00:09:51.155211 1296 log.go:172] (0xc000a58000) (0xc0006e12c0) Create stream\nI0404 00:09:51.155269 1296 log.go:172] (0xc000a58000) (0xc0006e12c0) Stream added, broadcasting: 1\nI0404 00:09:51.157714 1296 log.go:172] (0xc000a58000) Reply frame received for 1\nI0404 00:09:51.157761 1296 log.go:172] (0xc000a58000) (0xc0002f2000) Create stream\nI0404 00:09:51.157779 1296 log.go:172] (0xc000a58000) (0xc0002f2000) Stream added, broadcasting: 3\nI0404 00:09:51.158449 1296 log.go:172] (0xc000a58000) Reply frame received for 3\nI0404 00:09:51.158483 1296 log.go:172] (0xc000a58000) (0xc000474b40) Create stream\nI0404 00:09:51.158495 1296 log.go:172] (0xc000a58000) (0xc000474b40) Stream added, broadcasting: 5\nI0404 00:09:51.159068 1296 log.go:172] (0xc000a58000) Reply frame received for 5\nI0404 00:09:51.215809 1296 log.go:172] (0xc000a58000) Data frame received for 5\nI0404 00:09:51.215856 1296 log.go:172] (0xc000474b40) (5) Data frame handling\nI0404 00:09:51.215879 1296 log.go:172] (0xc000474b40) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0404 00:09:51.240636 1296 log.go:172] (0xc000a58000) Data frame received for 3\nI0404 00:09:51.240661 1296 log.go:172] (0xc0002f2000) (3) Data frame handling\nI0404 00:09:51.240682 1296 log.go:172] (0xc0002f2000) (3) Data frame sent\nI0404 00:09:51.241055 1296 log.go:172] (0xc000a58000) Data frame received for 3\nI0404 00:09:51.241082 1296 log.go:172] (0xc0002f2000) (3) Data frame handling\nI0404 00:09:51.241104 1296 log.go:172] (0xc000a58000) Data frame received for 5\nI0404 00:09:51.241264 1296 log.go:172] (0xc000474b40) (5) Data frame handling\nI0404 00:09:51.242867 1296 log.go:172] (0xc000a58000) Data frame received for 1\nI0404 00:09:51.242885 1296 log.go:172] (0xc0006e12c0) (1) Data frame handling\nI0404 00:09:51.242898 1296 log.go:172] (0xc0006e12c0) (1) Data frame sent\nI0404 00:09:51.242910 1296 log.go:172] (0xc000a58000) (0xc0006e12c0) Stream 
removed, broadcasting: 1\nI0404 00:09:51.242924 1296 log.go:172] (0xc000a58000) Go away received\nI0404 00:09:51.243575 1296 log.go:172] (0xc000a58000) (0xc0006e12c0) Stream removed, broadcasting: 1\nI0404 00:09:51.243617 1296 log.go:172] (0xc000a58000) (0xc0002f2000) Stream removed, broadcasting: 3\nI0404 00:09:51.243640 1296 log.go:172] (0xc000a58000) (0xc000474b40) Stream removed, broadcasting: 5\n" Apr 4 00:09:51.248: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Apr 4 00:09:51.248: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-2: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Apr 4 00:09:51.248: INFO: Waiting for statefulset status.replicas updated to 0 Apr 4 00:09:51.252: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 2 Apr 4 00:10:01.260: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Apr 4 00:10:01.260: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false Apr 4 00:10:01.260: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false Apr 4 00:10:01.274: INFO: Verifying statefulset ss doesn't scale past 3 for another 9.999999658s Apr 4 00:10:02.279: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.992027608s Apr 4 00:10:03.283: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.987211815s Apr 4 00:10:04.288: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.982762404s Apr 4 00:10:05.293: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.978149618s Apr 4 00:10:06.297: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.973253136s Apr 4 00:10:07.302: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.968777355s Apr 4 00:10:08.307: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.964464289s Apr 4 00:10:09.331: INFO: Verifying statefulset 
ss doesn't scale past 3 for another 1.959593568s Apr 4 00:10:10.335: INFO: Verifying statefulset ss doesn't scale past 3 for another 935.433947ms STEP: Scaling down stateful set ss to 0 replicas and waiting until none of the pods will run in namespace statefulset-5060 Apr 4 00:10:11.341: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-5060 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Apr 4 00:10:11.567: INFO: stderr: "I0404 00:10:11.475915 1316 log.go:172] (0xc000a0cd10) (0xc000954280) Create stream\nI0404 00:10:11.475993 1316 log.go:172] (0xc000a0cd10) (0xc000954280) Stream added, broadcasting: 1\nI0404 00:10:11.479027 1316 log.go:172] (0xc000a0cd10) Reply frame received for 1\nI0404 00:10:11.479074 1316 log.go:172] (0xc000a0cd10) (0xc0009f0320) Create stream\nI0404 00:10:11.479093 1316 log.go:172] (0xc000a0cd10) (0xc0009f0320) Stream added, broadcasting: 3\nI0404 00:10:11.480094 1316 log.go:172] (0xc000a0cd10) Reply frame received for 3\nI0404 00:10:11.480128 1316 log.go:172] (0xc000a0cd10) (0xc0009f03c0) Create stream\nI0404 00:10:11.480139 1316 log.go:172] (0xc000a0cd10) (0xc0009f03c0) Stream added, broadcasting: 5\nI0404 00:10:11.481026 1316 log.go:172] (0xc000a0cd10) Reply frame received for 5\nI0404 00:10:11.561681 1316 log.go:172] (0xc000a0cd10) Data frame received for 5\nI0404 00:10:11.561714 1316 log.go:172] (0xc0009f03c0) (5) Data frame handling\nI0404 00:10:11.561732 1316 log.go:172] (0xc0009f03c0) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0404 00:10:11.561766 1316 log.go:172] (0xc000a0cd10) Data frame received for 3\nI0404 00:10:11.561791 1316 log.go:172] (0xc0009f0320) (3) Data frame handling\nI0404 00:10:11.561800 1316 log.go:172] (0xc0009f0320) (3) Data frame sent\nI0404 00:10:11.561811 1316 log.go:172] (0xc000a0cd10) Data frame received for 3\nI0404 00:10:11.561818 1316 log.go:172] (0xc0009f0320) (3) 
Data frame handling\nI0404 00:10:11.561834 1316 log.go:172] (0xc000a0cd10) Data frame received for 5\nI0404 00:10:11.561842 1316 log.go:172] (0xc0009f03c0) (5) Data frame handling\nI0404 00:10:11.563240 1316 log.go:172] (0xc000a0cd10) Data frame received for 1\nI0404 00:10:11.563266 1316 log.go:172] (0xc000954280) (1) Data frame handling\nI0404 00:10:11.563289 1316 log.go:172] (0xc000954280) (1) Data frame sent\nI0404 00:10:11.563383 1316 log.go:172] (0xc000a0cd10) (0xc000954280) Stream removed, broadcasting: 1\nI0404 00:10:11.563465 1316 log.go:172] (0xc000a0cd10) Go away received\nI0404 00:10:11.563711 1316 log.go:172] (0xc000a0cd10) (0xc000954280) Stream removed, broadcasting: 1\nI0404 00:10:11.563733 1316 log.go:172] (0xc000a0cd10) (0xc0009f0320) Stream removed, broadcasting: 3\nI0404 00:10:11.563742 1316 log.go:172] (0xc000a0cd10) (0xc0009f03c0) Stream removed, broadcasting: 5\n" Apr 4 00:10:11.567: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Apr 4 00:10:11.567: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Apr 4 00:10:11.567: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-5060 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Apr 4 00:10:11.761: INFO: stderr: "I0404 00:10:11.690080 1335 log.go:172] (0xc000a000b0) (0xc00041ca00) Create stream\nI0404 00:10:11.690138 1335 log.go:172] (0xc000a000b0) (0xc00041ca00) Stream added, broadcasting: 1\nI0404 00:10:11.693284 1335 log.go:172] (0xc000a000b0) Reply frame received for 1\nI0404 00:10:11.693331 1335 log.go:172] (0xc000a000b0) (0xc0009d6000) Create stream\nI0404 00:10:11.693343 1335 log.go:172] (0xc000a000b0) (0xc0009d6000) Stream added, broadcasting: 3\nI0404 00:10:11.694512 1335 log.go:172] (0xc000a000b0) Reply frame received for 3\nI0404 00:10:11.694560 1335 
log.go:172] (0xc000a000b0) (0xc0009d60a0) Create stream\nI0404 00:10:11.694572 1335 log.go:172] (0xc000a000b0) (0xc0009d60a0) Stream added, broadcasting: 5\nI0404 00:10:11.695566 1335 log.go:172] (0xc000a000b0) Reply frame received for 5\nI0404 00:10:11.753654 1335 log.go:172] (0xc000a000b0) Data frame received for 5\nI0404 00:10:11.753714 1335 log.go:172] (0xc0009d60a0) (5) Data frame handling\nI0404 00:10:11.753745 1335 log.go:172] (0xc0009d60a0) (5) Data frame sent\nI0404 00:10:11.753766 1335 log.go:172] (0xc000a000b0) Data frame received for 5\nI0404 00:10:11.753794 1335 log.go:172] (0xc0009d60a0) (5) Data frame handling\nI0404 00:10:11.753817 1335 log.go:172] (0xc000a000b0) Data frame received for 3\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0404 00:10:11.753833 1335 log.go:172] (0xc0009d6000) (3) Data frame handling\nI0404 00:10:11.753867 1335 log.go:172] (0xc0009d6000) (3) Data frame sent\nI0404 00:10:11.754159 1335 log.go:172] (0xc000a000b0) Data frame received for 3\nI0404 00:10:11.754185 1335 log.go:172] (0xc0009d6000) (3) Data frame handling\nI0404 00:10:11.755805 1335 log.go:172] (0xc000a000b0) Data frame received for 1\nI0404 00:10:11.755838 1335 log.go:172] (0xc00041ca00) (1) Data frame handling\nI0404 00:10:11.755872 1335 log.go:172] (0xc00041ca00) (1) Data frame sent\nI0404 00:10:11.756033 1335 log.go:172] (0xc000a000b0) (0xc00041ca00) Stream removed, broadcasting: 1\nI0404 00:10:11.756084 1335 log.go:172] (0xc000a000b0) Go away received\nI0404 00:10:11.756429 1335 log.go:172] (0xc000a000b0) (0xc00041ca00) Stream removed, broadcasting: 1\nI0404 00:10:11.756448 1335 log.go:172] (0xc000a000b0) (0xc0009d6000) Stream removed, broadcasting: 3\nI0404 00:10:11.756456 1335 log.go:172] (0xc000a000b0) (0xc0009d60a0) Stream removed, broadcasting: 5\n" Apr 4 00:10:11.761: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Apr 4 00:10:11.761: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: 
'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Apr 4 00:10:11.761: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-5060 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Apr 4 00:10:11.969: INFO: stderr: "I0404 00:10:11.884572 1357 log.go:172] (0xc00056db80) (0xc000ae2000) Create stream\nI0404 00:10:11.884638 1357 log.go:172] (0xc00056db80) (0xc000ae2000) Stream added, broadcasting: 1\nI0404 00:10:11.887172 1357 log.go:172] (0xc00056db80) Reply frame received for 1\nI0404 00:10:11.887223 1357 log.go:172] (0xc00056db80) (0xc00079d360) Create stream\nI0404 00:10:11.887236 1357 log.go:172] (0xc00056db80) (0xc00079d360) Stream added, broadcasting: 3\nI0404 00:10:11.888316 1357 log.go:172] (0xc00056db80) Reply frame received for 3\nI0404 00:10:11.888356 1357 log.go:172] (0xc00056db80) (0xc000ae20a0) Create stream\nI0404 00:10:11.888373 1357 log.go:172] (0xc00056db80) (0xc000ae20a0) Stream added, broadcasting: 5\nI0404 00:10:11.889554 1357 log.go:172] (0xc00056db80) Reply frame received for 5\nI0404 00:10:11.962507 1357 log.go:172] (0xc00056db80) Data frame received for 3\nI0404 00:10:11.962554 1357 log.go:172] (0xc00079d360) (3) Data frame handling\nI0404 00:10:11.962575 1357 log.go:172] (0xc00079d360) (3) Data frame sent\nI0404 00:10:11.962602 1357 log.go:172] (0xc00056db80) Data frame received for 3\nI0404 00:10:11.962617 1357 log.go:172] (0xc00079d360) (3) Data frame handling\nI0404 00:10:11.962648 1357 log.go:172] (0xc00056db80) Data frame received for 5\nI0404 00:10:11.962666 1357 log.go:172] (0xc000ae20a0) (5) Data frame handling\nI0404 00:10:11.962698 1357 log.go:172] (0xc000ae20a0) (5) Data frame sent\nI0404 00:10:11.962711 1357 log.go:172] (0xc00056db80) Data frame received for 5\nI0404 00:10:11.962722 1357 log.go:172] (0xc000ae20a0) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0404 00:10:11.964400 
1357 log.go:172] (0xc00056db80) Data frame received for 1\nI0404 00:10:11.964423 1357 log.go:172] (0xc000ae2000) (1) Data frame handling\nI0404 00:10:11.964438 1357 log.go:172] (0xc000ae2000) (1) Data frame sent\nI0404 00:10:11.964459 1357 log.go:172] (0xc00056db80) (0xc000ae2000) Stream removed, broadcasting: 1\nI0404 00:10:11.964480 1357 log.go:172] (0xc00056db80) Go away received\nI0404 00:10:11.964879 1357 log.go:172] (0xc00056db80) (0xc000ae2000) Stream removed, broadcasting: 1\nI0404 00:10:11.964895 1357 log.go:172] (0xc00056db80) (0xc00079d360) Stream removed, broadcasting: 3\nI0404 00:10:11.964903 1357 log.go:172] (0xc00056db80) (0xc000ae20a0) Stream removed, broadcasting: 5\n" Apr 4 00:10:11.969: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Apr 4 00:10:11.969: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-2: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Apr 4 00:10:11.969: INFO: Scaling statefulset ss to 0 STEP: Verifying that stateful set ss was scaled down in reverse order [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:110 Apr 4 00:10:41.996: INFO: Deleting all statefulset in ns statefulset-5060 Apr 4 00:10:41.999: INFO: Scaling statefulset ss to 0 Apr 4 00:10:42.008: INFO: Waiting for statefulset status.replicas updated to 0 Apr 4 00:10:42.010: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 4 00:10:42.021: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-5060" for this suite. 
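[Editor's sketch] The ordered scale-up/scale-down behavior verified above depends on the stateful pods' readiness probe serving /index.html from the httpd docroot; the `mv` commands in this log toggle each pod between Ready and not-Ready. A minimal StatefulSet of that shape could look like the following (image tag and probe timings are illustrative assumptions, not taken from this run):

```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: ss
spec:
  serviceName: ss
  replicas: 3
  podManagementPolicy: OrderedReady   # the default: create 0->1->2, delete 2->1->0
  selector:
    matchLabels:
      app: ss
  template:
    metadata:
      labels:
        app: ss
    spec:
      containers:
      - name: webserver
        image: httpd:2.4              # illustrative tag
        readinessProbe:
          httpGet:
            path: /index.html         # the test moves this file in and out of the docroot
            port: 80
          periodSeconds: 1
```

With OrderedReady management the controller will not create ss-1 until ss-0 is Ready, and will not proceed during scale-down while a higher-ordinal pod is unhealthy, which is what the repeated "doesn't scale past N" polling above verifies.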
• [SLOW TEST:92.590 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]","total":275,"completed":126,"skipped":2122,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 4 00:10:42.030: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test downward API volume plugin Apr 4 00:10:42.099: INFO: Waiting up to 5m0s for pod "downwardapi-volume-8e691521-e0d6-4da5-8590-bcc259f97852" in namespace "downward-api-7145" to be "Succeeded or 
Failed" Apr 4 00:10:42.103: INFO: Pod "downwardapi-volume-8e691521-e0d6-4da5-8590-bcc259f97852": Phase="Pending", Reason="", readiness=false. Elapsed: 3.567199ms Apr 4 00:10:44.107: INFO: Pod "downwardapi-volume-8e691521-e0d6-4da5-8590-bcc259f97852": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007761385s Apr 4 00:10:46.111: INFO: Pod "downwardapi-volume-8e691521-e0d6-4da5-8590-bcc259f97852": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011692251s STEP: Saw pod success Apr 4 00:10:46.111: INFO: Pod "downwardapi-volume-8e691521-e0d6-4da5-8590-bcc259f97852" satisfied condition "Succeeded or Failed" Apr 4 00:10:46.114: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-8e691521-e0d6-4da5-8590-bcc259f97852 container client-container: STEP: delete the pod Apr 4 00:10:46.146: INFO: Waiting for pod downwardapi-volume-8e691521-e0d6-4da5-8590-bcc259f97852 to disappear Apr 4 00:10:46.151: INFO: Pod downwardapi-volume-8e691521-e0d6-4da5-8590-bcc259f97852 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 4 00:10:46.151: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-7145" for this suite. 
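[Editor's sketch] The DefaultMode test above creates a pod whose container inspects a file projected from pod metadata and then exits, so the pod reaches Succeeded and its log can be checked. A hedged sketch of such a pod (file name, mode, and image are illustrative assumptions):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-example
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox:1.29               # illustrative image
    command: ["sh", "-c", "stat -c '%a' /etc/podinfo/podname"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      defaultMode: 0400               # applied to every projected file unless an item sets its own mode
      items:
      - path: podname
        fieldRef:
          fieldPath: metadata.name
```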
•{"msg":"PASSED [sig-storage] Downward API volume should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":127,"skipped":2154,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 4 00:10:46.157: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating configMap with name projected-configmap-test-volume-map-22782e4c-fba6-4575-b5ff-ce2145a7b432 STEP: Creating a pod to test consume configMaps Apr 4 00:10:46.237: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-1a2285b6-99a8-4004-98fd-778e05e50d4b" in namespace "projected-7670" to be "Succeeded or Failed" Apr 4 00:10:46.251: INFO: Pod "pod-projected-configmaps-1a2285b6-99a8-4004-98fd-778e05e50d4b": Phase="Pending", Reason="", readiness=false. Elapsed: 13.88878ms Apr 4 00:10:48.255: INFO: Pod "pod-projected-configmaps-1a2285b6-99a8-4004-98fd-778e05e50d4b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018145134s Apr 4 00:10:50.260: INFO: Pod "pod-projected-configmaps-1a2285b6-99a8-4004-98fd-778e05e50d4b": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.022370287s STEP: Saw pod success Apr 4 00:10:50.260: INFO: Pod "pod-projected-configmaps-1a2285b6-99a8-4004-98fd-778e05e50d4b" satisfied condition "Succeeded or Failed" Apr 4 00:10:50.262: INFO: Trying to get logs from node latest-worker2 pod pod-projected-configmaps-1a2285b6-99a8-4004-98fd-778e05e50d4b container projected-configmap-volume-test: STEP: delete the pod Apr 4 00:10:50.295: INFO: Waiting for pod pod-projected-configmaps-1a2285b6-99a8-4004-98fd-778e05e50d4b to disappear Apr 4 00:10:50.311: INFO: Pod pod-projected-configmaps-1a2285b6-99a8-4004-98fd-778e05e50d4b no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 4 00:10:50.311: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-7670" for this suite. •{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":275,"completed":128,"skipped":2166,"failed":0} SSSSS ------------------------------ [sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected combined /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 4 00:10:50.318: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating configMap with name 
configmap-projected-all-test-volume-df76df98-242b-468c-9b4f-78e049cc7423 STEP: Creating secret with name secret-projected-all-test-volume-8a3cb05a-3ae4-4252-9746-9fed14d3e1c9 STEP: Creating a pod to test Check all projections for projected volume plugin Apr 4 00:10:50.440: INFO: Waiting up to 5m0s for pod "projected-volume-382bf311-cb57-4c13-b4f7-b7897f0823e6" in namespace "projected-1161" to be "Succeeded or Failed" Apr 4 00:10:50.443: INFO: Pod "projected-volume-382bf311-cb57-4c13-b4f7-b7897f0823e6": Phase="Pending", Reason="", readiness=false. Elapsed: 3.390882ms Apr 4 00:10:52.446: INFO: Pod "projected-volume-382bf311-cb57-4c13-b4f7-b7897f0823e6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00685862s Apr 4 00:10:54.450: INFO: Pod "projected-volume-382bf311-cb57-4c13-b4f7-b7897f0823e6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010881399s STEP: Saw pod success Apr 4 00:10:54.451: INFO: Pod "projected-volume-382bf311-cb57-4c13-b4f7-b7897f0823e6" satisfied condition "Succeeded or Failed" Apr 4 00:10:54.453: INFO: Trying to get logs from node latest-worker pod projected-volume-382bf311-cb57-4c13-b4f7-b7897f0823e6 container projected-all-volume-test: STEP: delete the pod Apr 4 00:10:54.502: INFO: Waiting for pod projected-volume-382bf311-cb57-4c13-b4f7-b7897f0823e6 to disappear Apr 4 00:10:54.521: INFO: Pod projected-volume-382bf311-cb57-4c13-b4f7-b7897f0823e6 no longer exists [AfterEach] [sig-storage] Projected combined /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 4 00:10:54.521: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1161" for this suite. 
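[Editor's sketch] The "Projected combined" test mounts a configMap, a secret, and Downward API fields through a single `projected` volume so that all three sources appear under one mount point. A sketch with hypothetical resource names (`my-configmap`, `my-secret`) and key/path choices:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: projected-volume-example
spec:
  restartPolicy: Never
  containers:
  - name: projected-all-volume-test
    image: busybox:1.29               # illustrative image
    command: ["sh", "-c", "cat /all/configmap-data /all/secret-data /all/podname"]
    volumeMounts:
    - name: all-in-one
      mountPath: /all
  volumes:
  - name: all-in-one
    projected:
      sources:
      - configMap:
          name: my-configmap          # hypothetical name
          items:
          - key: data
            path: configmap-data
      - secret:
          name: my-secret             # hypothetical name
          items:
          - key: data
            path: secret-data
      - downwardAPI:
          items:
          - path: podname
            fieldRef:
              fieldPath: metadata.name
```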
•{"msg":"PASSED [sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance]","total":275,"completed":129,"skipped":2171,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 4 00:10:54.529: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:126
STEP: Setting up server cert
STEP: Create role binding to let cr conversion webhook read extension-apiserver-authentication
STEP: Deploying the custom resource conversion webhook pod
STEP: Wait for the deployment to be ready
Apr 4 00:10:55.110: INFO: deployment "sample-crd-conversion-webhook-deployment" doesn't have the required revision set
Apr 4 00:10:57.119: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721555855, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721555855, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721555855, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721555855, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-54c8b67c75\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Apr 4 00:11:00.140: INFO: Waiting for amount of service:e2e-test-crd-conversion-webhook endpoints to be 1
[It] should be able to convert a non homogeneous list of CRs [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Apr 4 00:11:00.144: INFO: >>> kubeConfig: /root/.kube/config
STEP: Creating a v1 custom resource
STEP: Create a v2 custom resource
STEP: List CRs in v1
STEP: List CRs in v2
[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 4 00:11:01.394: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-webhook-3861" for this suite.
[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:137
• [SLOW TEST:6.940 seconds]
[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
should be able to convert a non homogeneous list of CRs [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","total":275,"completed":130,"skipped":2199,"failed":0}
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 4 00:11:01.469: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should use the image defaults if command and args are blank [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[AfterEach] [k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 4 00:11:05.592: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-4920" for this suite.
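The Docker Containers case above verifies that a pod with blank `command` and `args` falls back to the image's own entrypoint and cmd. The documented Kubernetes defaulting rule can be sketched as a small pure function (a sketch of the published semantics, not the e2e framework's actual code; the function name is hypothetical):

```python
def effective_command(image_entrypoint, image_cmd, command=None, args=None):
    """Return the argv a container runs with, per the Kubernetes defaulting rules.

    - Neither command nor args set: image ENTRYPOINT runs with image CMD.
    - Only command set: it replaces ENTRYPOINT, and image CMD is ignored.
    - Only args set: image ENTRYPOINT runs with the supplied args.
    - Both set: they are used verbatim, ignoring the image defaults.
    """
    if command is None and args is None:
        return list(image_entrypoint) + list(image_cmd)
    if command is not None and args is None:
        return list(command)
    if command is None:
        return list(image_entrypoint) + list(args)
    return list(command) + list(args)
```

With both fields left blank, as in the test above, `effective_command(["httpd-foreground"], [])` simply yields the image's own entrypoint.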
•{"msg":"PASSED [k8s.io] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance]","total":275,"completed":131,"skipped":2220,"failed":0}
SSSS
------------------------------
[sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a service. [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 4 00:11:05.600: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and capture the life of a service. [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
STEP: Creating a Service
STEP: Ensuring resource quota status captures service creation
STEP: Deleting a Service
STEP: Ensuring resource quota status released usage
[AfterEach] [sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 4 00:11:16.734: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-2348" for this suite.
• [SLOW TEST:11.143 seconds]
[sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
should create a ResourceQuota and capture the life of a service. [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a service. [Conformance]","total":275,"completed":132,"skipped":2224,"failed":0}
SSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 4 00:11:16.743: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:84
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:99
STEP: Creating service test in namespace statefulset-2678
[It] should perform canary updates and phased rolling updates of template modifications [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a new StatefulSet
Apr 4 00:11:16.853: INFO: Found 0 stateful pods, waiting for 3
Apr 4 00:11:26.857: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Apr 4 00:11:26.857: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Apr 4 00:11:26.857: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=false
Apr 4 00:11:36.877: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Apr 4 00:11:36.877: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Apr 4 00:11:36.877: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Updating stateful set template: update image from docker.io/library/httpd:2.4.38-alpine to docker.io/library/httpd:2.4.39-alpine
Apr 4 00:11:36.903: INFO: Updating stateful set ss2
STEP: Creating a new revision
STEP: Not applying an update when the partition is greater than the number of replicas
STEP: Performing a canary update
Apr 4 00:11:46.992: INFO: Updating stateful set ss2
Apr 4 00:11:47.032: INFO: Waiting for Pod statefulset-2678/ss2-2 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
STEP: Restoring Pods to the correct revision when they are deleted
Apr 4 00:11:57.215: INFO: Found 2 stateful pods, waiting for 3
Apr 4 00:12:07.219: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Apr 4 00:12:07.219: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Apr 4 00:12:07.219: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Performing a phased rolling update
Apr 4 00:12:07.239: INFO: Updating stateful set ss2
Apr 4 00:12:07.254: INFO: Waiting for Pod statefulset-2678/ss2-1 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
Apr 4 00:12:17.279: INFO: Updating stateful set ss2
Apr 4 00:12:17.345: INFO: Waiting for StatefulSet statefulset-2678/ss2 to complete update
Apr 4 00:12:17.345: INFO: Waiting for Pod statefulset-2678/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:110
Apr 4 00:12:27.353: INFO: Deleting all statefulset in ns statefulset-2678
Apr 4 00:12:27.356: INFO: Scaling statefulset ss2 to 0
Apr 4 00:12:37.375: INFO: Waiting for statefulset status.replicas updated to 0
Apr 4 00:12:37.378: INFO: Deleting statefulset ss2
[AfterEach] [sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 4 00:12:37.393: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-2678" for this suite.
• [SLOW TEST:80.672 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
[k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
should perform canary updates and phased rolling updates of template modifications [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance]","total":275,"completed":133,"skipped":2231,"failed":0}
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] updates the published spec when one version gets renamed [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 4 00:12:37.415: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] updates the published spec when one version gets renamed [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: set up a multi version CRD
Apr 4 00:12:37.556: INFO: >>> kubeConfig: /root/.kube/config
STEP: rename a version
STEP: check the new version name is served
STEP: check the old version name is removed
STEP: check the other version is not changed
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 4 00:12:52.265: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-3444" for this suite.
• [SLOW TEST:14.857 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
updates the published spec when one version gets renamed [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] updates the published spec when one version gets renamed [Conformance]","total":275,"completed":134,"skipped":2250,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 4 00:12:52.273: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: create the rc
STEP: delete the rc
STEP: wait for the rc to be deleted
Apr 4 00:12:59.376: INFO: 10 pods remaining
Apr 4 00:12:59.376: INFO: 10 pods has nil DeletionTimestamp
Apr 4 00:12:59.376: INFO:
Apr 4 00:13:00.002: INFO: 10 pods remaining
Apr 4 00:13:00.002: INFO: 10 pods has nil DeletionTimestamp
Apr 4 00:13:00.002: INFO:
Apr 4 00:13:00.644: INFO: 10 pods remaining
Apr 4 00:13:00.644: INFO: 10 pods has nil DeletionTimestamp
Apr 4 00:13:00.644: INFO:
Apr 4 00:13:01.635: INFO: 10 pods remaining
Apr 4 00:13:01.635: INFO: 10 pods has nil DeletionTimestamp
Apr 4 00:13:01.635: INFO:
Apr 4 00:13:02.636: INFO: 10 pods remaining
Apr 4 00:13:02.636: INFO: 10 pods has nil DeletionTimestamp
Apr 4 00:13:02.636: INFO:
Apr 4 00:13:03.636: INFO: 10 pods remaining
Apr 4 00:13:03.636: INFO: 10 pods has nil DeletionTimestamp
Apr 4 00:13:03.636: INFO:
Apr 4 00:13:04.636: INFO: 10 pods remaining
Apr 4 00:13:04.636: INFO: 10 pods has nil DeletionTimestamp
Apr 4 00:13:04.636: INFO:
Apr 4 00:13:05.635: INFO: 10 pods remaining
Apr 4 00:13:05.636: INFO: 10 pods has nil DeletionTimestamp
Apr 4 00:13:05.636: INFO:
Apr 4 00:13:06.634: INFO: 10 pods remaining
Apr 4 00:13:06.634: INFO: 10 pods has nil DeletionTimestamp
Apr 4 00:13:06.634: INFO:
Apr 4 00:13:07.635: INFO: 10 pods remaining
Apr 4 00:13:07.635: INFO: 10 pods has nil DeletionTimestamp
Apr 4 00:13:07.635: INFO:
Apr 4 00:13:08.635: INFO: 10 pods remaining
Apr 4 00:13:08.635: INFO: 10 pods has nil DeletionTimestamp
Apr 4 00:13:08.635: INFO:
Apr 4 00:13:09.634: INFO: 10 pods remaining
Apr 4 00:13:09.634: INFO: 10 pods has nil DeletionTimestamp
Apr 4 00:13:09.634: INFO:
Apr 4 00:13:10.634: INFO: 10 pods remaining
Apr 4 00:13:10.634: INFO: 10 pods has nil DeletionTimestamp
Apr 4 00:13:10.634: INFO:
Apr 4 00:13:11.635: INFO: 10 pods remaining
Apr 4 00:13:11.635: INFO: 10 pods has nil DeletionTimestamp
Apr 4 00:13:11.635: INFO:
Apr 4 00:13:12.634: INFO: 10 pods remaining
Apr 4 00:13:12.634: INFO: 10 pods has nil DeletionTimestamp
Apr 4 00:13:12.634: INFO:
Apr 4 00:13:13.668: INFO: 0 pods remaining
Apr 4 00:13:13.668: INFO: 0 pods has nil DeletionTimestamp
Apr 4 00:13:13.668: INFO:
STEP: Gathering metrics
W0404 00:13:14.638008 7 metrics_grabber.go:84] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Apr 4 00:13:14.638: INFO: For apiserver_request_total:
For apiserver_request_latency_seconds:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:
[AfterEach] [sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 4 00:13:14.638: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-7904" for this suite.
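The garbage-collector behaviour exercised above (the rc stays visible while its pods are torn down, then disappears once the last pod is gone) corresponds to foreground cascading deletion. A toy model of that bookkeeping, with hypothetical names standing in for the real controller's graph and the `foregroundDeletion` finalizer:

```python
class OwnerGraph:
    """Toy model of foreground cascading deletion: an owner object marked
    for deletion stays visible until every dependent has been removed."""

    def __init__(self, owner, dependents):
        self.objects = {owner} | set(dependents)
        self.owner = owner
        self.dependents = set(dependents)
        self.owner_deleting = False  # stands in for the foregroundDeletion finalizer

    def delete_owner_foreground(self):
        # Mark the owner; it remains in self.objects while dependents exist.
        self.owner_deleting = True
        self._sync()

    def delete_dependent(self, name):
        self.objects.discard(name)
        self.dependents.discard(name)
        self._sync()

    def _sync(self):
        # The owner is finally removed once no dependents remain.
        if self.owner_deleting and not self.dependents:
            self.objects.discard(self.owner)
```

This mirrors the log: "10 pods remaining" repeats while the rc survives, and the rc is collected only after "0 pods remaining".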
• [SLOW TEST:22.373 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]","total":275,"completed":135,"skipped":2275,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] DNS should provide DNS for ExternalName services [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 4 00:13:14.647: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for ExternalName services [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a test externalName service
STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-654.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-654.svc.cluster.local; sleep 1; done
STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-654.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-654.svc.cluster.local; sleep 1; done
STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Apr 4 00:13:20.983: INFO: DNS probes using dns-test-dd44d0b9-0a0b-451f-bc43-dbe8f46fb667 succeeded
STEP: deleting the pod
STEP: changing the externalName to bar.example.com
STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-654.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-654.svc.cluster.local; sleep 1; done
STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-654.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-654.svc.cluster.local; sleep 1; done
STEP: creating a second pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Apr 4 00:13:27.072: INFO: File wheezy_udp@dns-test-service-3.dns-654.svc.cluster.local from pod dns-654/dns-test-2ceba86c-a16a-462a-939d-040e0741c83c contains 'foo.example.com. ' instead of 'bar.example.com.'
Apr 4 00:13:27.076: INFO: File jessie_udp@dns-test-service-3.dns-654.svc.cluster.local from pod dns-654/dns-test-2ceba86c-a16a-462a-939d-040e0741c83c contains 'foo.example.com. ' instead of 'bar.example.com.'
Apr 4 00:13:27.076: INFO: Lookups using dns-654/dns-test-2ceba86c-a16a-462a-939d-040e0741c83c failed for: [wheezy_udp@dns-test-service-3.dns-654.svc.cluster.local jessie_udp@dns-test-service-3.dns-654.svc.cluster.local]
Apr 4 00:13:32.081: INFO: File wheezy_udp@dns-test-service-3.dns-654.svc.cluster.local from pod dns-654/dns-test-2ceba86c-a16a-462a-939d-040e0741c83c contains 'foo.example.com. ' instead of 'bar.example.com.'
Apr 4 00:13:32.084: INFO: File jessie_udp@dns-test-service-3.dns-654.svc.cluster.local from pod dns-654/dns-test-2ceba86c-a16a-462a-939d-040e0741c83c contains 'foo.example.com. ' instead of 'bar.example.com.'
Apr 4 00:13:32.084: INFO: Lookups using dns-654/dns-test-2ceba86c-a16a-462a-939d-040e0741c83c failed for: [wheezy_udp@dns-test-service-3.dns-654.svc.cluster.local jessie_udp@dns-test-service-3.dns-654.svc.cluster.local]
Apr 4 00:13:37.080: INFO: File wheezy_udp@dns-test-service-3.dns-654.svc.cluster.local from pod dns-654/dns-test-2ceba86c-a16a-462a-939d-040e0741c83c contains 'foo.example.com. ' instead of 'bar.example.com.'
Apr 4 00:13:37.083: INFO: File jessie_udp@dns-test-service-3.dns-654.svc.cluster.local from pod dns-654/dns-test-2ceba86c-a16a-462a-939d-040e0741c83c contains 'foo.example.com. ' instead of 'bar.example.com.'
Apr 4 00:13:37.083: INFO: Lookups using dns-654/dns-test-2ceba86c-a16a-462a-939d-040e0741c83c failed for: [wheezy_udp@dns-test-service-3.dns-654.svc.cluster.local jessie_udp@dns-test-service-3.dns-654.svc.cluster.local]
Apr 4 00:13:42.081: INFO: File wheezy_udp@dns-test-service-3.dns-654.svc.cluster.local from pod dns-654/dns-test-2ceba86c-a16a-462a-939d-040e0741c83c contains 'foo.example.com. ' instead of 'bar.example.com.'
Apr 4 00:13:42.085: INFO: File jessie_udp@dns-test-service-3.dns-654.svc.cluster.local from pod dns-654/dns-test-2ceba86c-a16a-462a-939d-040e0741c83c contains 'foo.example.com. ' instead of 'bar.example.com.'
Apr 4 00:13:42.085: INFO: Lookups using dns-654/dns-test-2ceba86c-a16a-462a-939d-040e0741c83c failed for: [wheezy_udp@dns-test-service-3.dns-654.svc.cluster.local jessie_udp@dns-test-service-3.dns-654.svc.cluster.local]
Apr 4 00:13:47.081: INFO: File wheezy_udp@dns-test-service-3.dns-654.svc.cluster.local from pod dns-654/dns-test-2ceba86c-a16a-462a-939d-040e0741c83c contains 'foo.example.com. ' instead of 'bar.example.com.'
Apr 4 00:13:47.084: INFO: File jessie_udp@dns-test-service-3.dns-654.svc.cluster.local from pod dns-654/dns-test-2ceba86c-a16a-462a-939d-040e0741c83c contains 'foo.example.com. ' instead of 'bar.example.com.'
Apr 4 00:13:47.084: INFO: Lookups using dns-654/dns-test-2ceba86c-a16a-462a-939d-040e0741c83c failed for: [wheezy_udp@dns-test-service-3.dns-654.svc.cluster.local jessie_udp@dns-test-service-3.dns-654.svc.cluster.local]
Apr 4 00:13:52.084: INFO: DNS probes using dns-test-2ceba86c-a16a-462a-939d-040e0741c83c succeeded
STEP: deleting the pod
STEP: changing the service to type=ClusterIP
STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-654.svc.cluster.local A > /results/wheezy_udp@dns-test-service-3.dns-654.svc.cluster.local; sleep 1; done
STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-654.svc.cluster.local A > /results/jessie_udp@dns-test-service-3.dns-654.svc.cluster.local; sleep 1; done
STEP: creating a third pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Apr 4 00:13:58.212: INFO: DNS probes using dns-test-2062ce72-8e18-466b-9e4a-236acb58a25d succeeded
STEP: deleting the pod
STEP: deleting the test externalName service
[AfterEach] [sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 4 00:13:58.313: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-654" for this suite.
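The repeated "contains 'foo.example.com. ' instead of 'bar.example.com.'" lines above are the framework polling until the ExternalName change propagates as a new CNAME. The retry pattern can be sketched as follows (a sketch only; `lookup_cname` is a hypothetical callable, not part of the e2e framework):

```python
import time

def wait_for_cname(lookup_cname, name, expected,
                   timeout=60.0, interval=5.0,
                   clock=time.monotonic, sleep=time.sleep):
    """Poll lookup_cname(name) until it matches `expected` or the timeout expires.

    Trailing whitespace and the trailing dot are stripped before comparing,
    matching the answers seen in the log ('foo.example.com. ').
    Returns True on success, False on timeout."""
    deadline = clock() + timeout
    while True:
        got = lookup_cname(name).strip().rstrip(".")
        if got == expected.rstrip("."):
            return True
        if clock() >= deadline:
            return False
        sleep(interval)
```

Injecting `clock` and `sleep` keeps the loop testable without real delays, which is also why the e2e run above can afford a fixed poll interval.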
• [SLOW TEST:43.677 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
should provide DNS for ExternalName services [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-network] DNS should provide DNS for ExternalName services [Conformance]","total":275,"completed":136,"skipped":2331,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 4 00:13:58.325: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating secret with name projected-secret-test-e5177716-4e10-4de2-ac59-e905625eb6b2
STEP: Creating a pod to test consume secrets
Apr 4 00:13:58.698: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-5e379753-ba81-4ab1-bb50-3ac803959648" in namespace "projected-32" to be "Succeeded or Failed"
Apr 4 00:13:58.747: INFO: Pod "pod-projected-secrets-5e379753-ba81-4ab1-bb50-3ac803959648": Phase="Pending", Reason="", readiness=false. Elapsed: 48.594913ms
Apr 4 00:14:00.790: INFO: Pod "pod-projected-secrets-5e379753-ba81-4ab1-bb50-3ac803959648": Phase="Pending", Reason="", readiness=false. Elapsed: 2.092080739s
Apr 4 00:14:02.793: INFO: Pod "pod-projected-secrets-5e379753-ba81-4ab1-bb50-3ac803959648": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.095254569s
STEP: Saw pod success
Apr 4 00:14:02.793: INFO: Pod "pod-projected-secrets-5e379753-ba81-4ab1-bb50-3ac803959648" satisfied condition "Succeeded or Failed"
Apr 4 00:14:02.796: INFO: Trying to get logs from node latest-worker pod pod-projected-secrets-5e379753-ba81-4ab1-bb50-3ac803959648 container secret-volume-test:
STEP: delete the pod
Apr 4 00:14:02.867: INFO: Waiting for pod pod-projected-secrets-5e379753-ba81-4ab1-bb50-3ac803959648 to disappear
Apr 4 00:14:02.900: INFO: Pod pod-projected-secrets-5e379753-ba81-4ab1-bb50-3ac803959648 no longer exists
[AfterEach] [sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 4 00:14:02.900: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-32" for this suite.
•{"msg":"PASSED [sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":275,"completed":137,"skipped":2401,"failed":0}
SSSS
------------------------------
[sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 4 00:14:02.910: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42
[It] should update annotations on modification [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating the pod
Apr 4 00:14:07.633: INFO: Successfully updated pod "annotationupdate2ef938ab-425e-4bd7-9f3f-fc11e645dc7a"
[AfterEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 4 00:14:09.651: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-2966" for this suite.
• [SLOW TEST:6.750 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:37
should update annotations on modification [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance]","total":275,"completed":138,"skipped":2405,"failed":0}
SSSSSSSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 4 00:14:09.660: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:38
[BeforeEach] when scheduling a busybox command that always fails in a pod
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:82
[It] should be possible to delete [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[AfterEach] [k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 4 00:14:09.868: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-8447" for this suite.
•
{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance]","total":275,"completed":139,"skipped":2420,"failed":0}
SSSSSSSSSSSS
------------------------------
[k8s.io] Probing container
  should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 4 00:14:09.880: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54
[It] should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating pod liveness-6b70a970-35a1-455a-8126-8e3a0a6fc849 in namespace container-probe-7336
Apr 4 00:14:13.948: INFO: Started pod liveness-6b70a970-35a1-455a-8126-8e3a0a6fc849 in namespace container-probe-7336
STEP: checking the pod's current state and verifying that restartCount is present
Apr 4 00:14:13.951: INFO: Initial restart count of pod liveness-6b70a970-35a1-455a-8126-8e3a0a6fc849 is 0
Apr 4 00:14:32.072: INFO: Restart count of pod container-probe-7336/liveness-6b70a970-35a1-455a-8126-8e3a0a6fc849 is now 1 (18.120781042s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 4 00:14:32.106: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-7336" for this suite.
• [SLOW TEST:22.300 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":275,"completed":140,"skipped":2432,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap
  should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 4 00:14:32.180: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating configMap with name configmap-test-volume-map-1e940612-a47f-4f12-bcac-8d5ff71e68b6
STEP: Creating a pod to test consume configMaps
Apr 4 00:14:32.250: INFO: Waiting up to 5m0s for pod "pod-configmaps-f6a9790a-e72f-4bd0-9068-b38977a6d3a3" in namespace "configmap-5516" to be "Succeeded or Failed"
Apr 4 00:14:32.514: INFO: Pod "pod-configmaps-f6a9790a-e72f-4bd0-9068-b38977a6d3a3": Phase="Pending", Reason="", readiness=false. Elapsed: 264.652836ms
Apr 4 00:14:34.519: INFO: Pod "pod-configmaps-f6a9790a-e72f-4bd0-9068-b38977a6d3a3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.268858001s
Apr 4 00:14:36.523: INFO: Pod "pod-configmaps-f6a9790a-e72f-4bd0-9068-b38977a6d3a3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.273295111s
STEP: Saw pod success
Apr 4 00:14:36.523: INFO: Pod "pod-configmaps-f6a9790a-e72f-4bd0-9068-b38977a6d3a3" satisfied condition "Succeeded or Failed"
Apr 4 00:14:36.527: INFO: Trying to get logs from node latest-worker pod pod-configmaps-f6a9790a-e72f-4bd0-9068-b38977a6d3a3 container configmap-volume-test:
STEP: delete the pod
Apr 4 00:14:36.571: INFO: Waiting for pod pod-configmaps-f6a9790a-e72f-4bd0-9068-b38977a6d3a3 to disappear
Apr 4 00:14:36.583: INFO: Pod pod-configmaps-f6a9790a-e72f-4bd0-9068-b38977a6d3a3 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 4 00:14:36.583: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-5516" for this suite.
•
{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":141,"skipped":2470,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes
  should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 4 00:14:36.591: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating pod pod-subpath-test-configmap-vf6j
STEP: Creating a pod to test atomic-volume-subpath
Apr 4 00:14:36.696: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-vf6j" in namespace "subpath-9499" to be "Succeeded or Failed"
Apr 4 00:14:36.700: INFO: Pod "pod-subpath-test-configmap-vf6j": Phase="Pending", Reason="", readiness=false. Elapsed: 3.974421ms
Apr 4 00:14:38.704: INFO: Pod "pod-subpath-test-configmap-vf6j": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008176428s
Apr 4 00:14:40.711: INFO: Pod "pod-subpath-test-configmap-vf6j": Phase="Running", Reason="", readiness=true. Elapsed: 4.015254491s
Apr 4 00:14:42.715: INFO: Pod "pod-subpath-test-configmap-vf6j": Phase="Running", Reason="", readiness=true. Elapsed: 6.019543482s
Apr 4 00:14:44.719: INFO: Pod "pod-subpath-test-configmap-vf6j": Phase="Running", Reason="", readiness=true. Elapsed: 8.023568138s
Apr 4 00:14:46.724: INFO: Pod "pod-subpath-test-configmap-vf6j": Phase="Running", Reason="", readiness=true. Elapsed: 10.027903614s
Apr 4 00:14:48.728: INFO: Pod "pod-subpath-test-configmap-vf6j": Phase="Running", Reason="", readiness=true. Elapsed: 12.032397364s
Apr 4 00:14:50.732: INFO: Pod "pod-subpath-test-configmap-vf6j": Phase="Running", Reason="", readiness=true. Elapsed: 14.036121436s
Apr 4 00:14:52.740: INFO: Pod "pod-subpath-test-configmap-vf6j": Phase="Running", Reason="", readiness=true. Elapsed: 16.04437081s
Apr 4 00:14:54.744: INFO: Pod "pod-subpath-test-configmap-vf6j": Phase="Running", Reason="", readiness=true. Elapsed: 18.04862872s
Apr 4 00:14:56.749: INFO: Pod "pod-subpath-test-configmap-vf6j": Phase="Running", Reason="", readiness=true. Elapsed: 20.053267421s
Apr 4 00:14:58.753: INFO: Pod "pod-subpath-test-configmap-vf6j": Phase="Running", Reason="", readiness=true. Elapsed: 22.057836296s
Apr 4 00:15:00.758: INFO: Pod "pod-subpath-test-configmap-vf6j": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.062129898s
STEP: Saw pod success
Apr 4 00:15:00.758: INFO: Pod "pod-subpath-test-configmap-vf6j" satisfied condition "Succeeded or Failed"
Apr 4 00:15:00.760: INFO: Trying to get logs from node latest-worker pod pod-subpath-test-configmap-vf6j container test-container-subpath-configmap-vf6j:
STEP: delete the pod
Apr 4 00:15:00.777: INFO: Waiting for pod pod-subpath-test-configmap-vf6j to disappear
Apr 4 00:15:00.781: INFO: Pod pod-subpath-test-configmap-vf6j no longer exists
STEP: Deleting pod pod-subpath-test-configmap-vf6j
Apr 4 00:15:00.781: INFO: Deleting pod "pod-subpath-test-configmap-vf6j" in namespace "subpath-9499"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 4 00:15:00.784: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-9499" for this suite.
• [SLOW TEST:24.200 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
    should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]","total":275,"completed":142,"skipped":2493,"failed":0}
SSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container
  should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 4 00:15:00.791: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: create the container
STEP: wait for the container to reach Succeeded
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Apr 4 00:15:03.939: INFO: Expected: &{} to match Container's Termination Message: --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 4 00:15:03.979: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-5904" for this suite.
•
{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":275,"completed":143,"skipped":2499,"failed":0}
SSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes
  should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 4 00:15:03.986: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test emptydir 0777 on tmpfs
Apr 4 00:15:04.160: INFO: Waiting up to 5m0s for pod "pod-164e63f4-39cf-4051-ac6d-a1f8e8f5166f" in namespace "emptydir-4540" to be "Succeeded or Failed"
Apr 4 00:15:04.196: INFO: Pod "pod-164e63f4-39cf-4051-ac6d-a1f8e8f5166f": Phase="Pending", Reason="", readiness=false. Elapsed: 35.662086ms
Apr 4 00:15:06.280: INFO: Pod "pod-164e63f4-39cf-4051-ac6d-a1f8e8f5166f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.120606493s
Apr 4 00:15:08.285: INFO: Pod "pod-164e63f4-39cf-4051-ac6d-a1f8e8f5166f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.12497341s
STEP: Saw pod success
Apr 4 00:15:08.285: INFO: Pod "pod-164e63f4-39cf-4051-ac6d-a1f8e8f5166f" satisfied condition "Succeeded or Failed"
Apr 4 00:15:08.288: INFO: Trying to get logs from node latest-worker2 pod pod-164e63f4-39cf-4051-ac6d-a1f8e8f5166f container test-container:
STEP: delete the pod
Apr 4 00:15:08.354: INFO: Waiting for pod pod-164e63f4-39cf-4051-ac6d-a1f8e8f5166f to disappear
Apr 4 00:15:08.368: INFO: Pod pod-164e63f4-39cf-4051-ac6d-a1f8e8f5166f no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 4 00:15:08.368: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-4540" for this suite.
•
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":144,"skipped":2514,"failed":0}
SSSSSSSS
------------------------------
[sig-storage] ConfigMap
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 4 00:15:08.374: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating configMap with name configmap-test-volume-a858046f-99dd-4966-b972-54bfe1276dce
STEP: Creating a pod to test consume configMaps
Apr 4 00:15:08.436: INFO: Waiting up to 5m0s for pod "pod-configmaps-3bc1a183-b772-4440-b7f2-29f897e297e1" in namespace "configmap-7921" to be "Succeeded or Failed"
Apr 4 00:15:08.450: INFO: Pod "pod-configmaps-3bc1a183-b772-4440-b7f2-29f897e297e1": Phase="Pending", Reason="", readiness=false. Elapsed: 13.629403ms
Apr 4 00:15:10.454: INFO: Pod "pod-configmaps-3bc1a183-b772-4440-b7f2-29f897e297e1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017838129s
Apr 4 00:15:12.458: INFO: Pod "pod-configmaps-3bc1a183-b772-4440-b7f2-29f897e297e1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.021786956s
STEP: Saw pod success
Apr 4 00:15:12.458: INFO: Pod "pod-configmaps-3bc1a183-b772-4440-b7f2-29f897e297e1" satisfied condition "Succeeded or Failed"
Apr 4 00:15:12.461: INFO: Trying to get logs from node latest-worker pod pod-configmaps-3bc1a183-b772-4440-b7f2-29f897e297e1 container configmap-volume-test:
STEP: delete the pod
Apr 4 00:15:12.494: INFO: Waiting for pod pod-configmaps-3bc1a183-b772-4440-b7f2-29f897e297e1 to disappear
Apr 4 00:15:12.520: INFO: Pod pod-configmaps-3bc1a183-b772-4440-b7f2-29f897e297e1 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 4 00:15:12.520: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-7921" for this suite.
•
{"msg":"PASSED [sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":275,"completed":145,"skipped":2522,"failed":0}
SSSSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook
  should execute prestop http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 4 00:15:12.529: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64
STEP: create the container to handle the HTTPGet hook request.
[It] should execute prestop http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: create the pod with lifecycle hook
STEP: delete the pod with lifecycle hook
Apr 4 00:15:20.715: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Apr 4 00:15:20.722: INFO: Pod pod-with-prestop-http-hook still exists
Apr 4 00:15:22.722: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Apr 4 00:15:22.727: INFO: Pod pod-with-prestop-http-hook still exists
Apr 4 00:15:24.722: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Apr 4 00:15:24.726: INFO: Pod pod-with-prestop-http-hook no longer exists
STEP: check prestop hook
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 4 00:15:24.733: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-1590" for this suite.
• [SLOW TEST:12.211 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute prestop http hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance]","total":275,"completed":146,"skipped":2531,"failed":0}
SSSSSSSSSSSS
------------------------------
[sig-apps] ReplicationController
  should serve a basic image on each replica with a public image [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 4 00:15:24.741: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should serve a basic image on each replica with a public image [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating replication controller my-hostname-basic-2e6a8703-c3ad-4ba6-a63a-66fe9e177745
Apr 4 00:15:24.806: INFO: Pod name my-hostname-basic-2e6a8703-c3ad-4ba6-a63a-66fe9e177745: Found 0 pods out of 1
Apr 4 00:15:29.812: INFO: Pod name my-hostname-basic-2e6a8703-c3ad-4ba6-a63a-66fe9e177745: Found 1 pods out of 1
Apr 4 00:15:29.812: INFO: Ensuring all pods for ReplicationController "my-hostname-basic-2e6a8703-c3ad-4ba6-a63a-66fe9e177745" are running
Apr 4 00:15:29.824: INFO: Pod "my-hostname-basic-2e6a8703-c3ad-4ba6-a63a-66fe9e177745-6l2dr" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-04-04 00:15:24 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-04-04 00:15:27 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-04-04 00:15:27 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-04-04 00:15:24 +0000 UTC Reason: Message:}])
Apr 4 00:15:29.824: INFO: Trying to dial the pod
Apr 4 00:15:34.834: INFO: Controller my-hostname-basic-2e6a8703-c3ad-4ba6-a63a-66fe9e177745: Got expected result from replica 1 [my-hostname-basic-2e6a8703-c3ad-4ba6-a63a-66fe9e177745-6l2dr]: "my-hostname-basic-2e6a8703-c3ad-4ba6-a63a-66fe9e177745-6l2dr", 1 of 1 required successes so far
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 4 00:15:34.834: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-4487" for this suite.
• [SLOW TEST:10.102 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should serve a basic image on each replica with a public image [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-apps] ReplicationController should serve a basic image on each replica with a public image [Conformance]","total":275,"completed":147,"skipped":2543,"failed":0}
SS
------------------------------
[sig-api-machinery] Watchers
  should receive events on concurrent watches in same order [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 4 00:15:34.843: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should receive events on concurrent watches in same order [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: starting a background goroutine to produce watch events
STEP: creating watches starting from each resource version of the events produced and verifying they all receive resource versions in the same order
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 4 00:15:40.151: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-5856" for this suite.
• [SLOW TEST:5.407 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should receive events on concurrent watches in same order [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] Watchers should receive events on concurrent watches in same order [Conformance]","total":275,"completed":148,"skipped":2545,"failed":0}
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Secrets
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 4 00:15:40.250: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating secret secrets-2830/secret-test-3e3f9923-95b0-40af-8944-b0ac6609b87c
STEP: Creating a pod to test consume secrets
Apr 4 00:15:40.314: INFO: Waiting up to 5m0s for pod "pod-configmaps-54b1306b-b792-4097-ae92-314bd16f7f85" in namespace "secrets-2830" to be "Succeeded or Failed"
Apr 4 00:15:40.328: INFO: Pod "pod-configmaps-54b1306b-b792-4097-ae92-314bd16f7f85": Phase="Pending", Reason="", readiness=false. Elapsed: 14.544346ms
Apr 4 00:15:42.409: INFO: Pod "pod-configmaps-54b1306b-b792-4097-ae92-314bd16f7f85": Phase="Pending", Reason="", readiness=false. Elapsed: 2.095400844s
Apr 4 00:15:44.413: INFO: Pod "pod-configmaps-54b1306b-b792-4097-ae92-314bd16f7f85": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.09980172s
STEP: Saw pod success
Apr 4 00:15:44.413: INFO: Pod "pod-configmaps-54b1306b-b792-4097-ae92-314bd16f7f85" satisfied condition "Succeeded or Failed"
Apr 4 00:15:44.416: INFO: Trying to get logs from node latest-worker2 pod pod-configmaps-54b1306b-b792-4097-ae92-314bd16f7f85 container env-test:
STEP: delete the pod
Apr 4 00:15:44.480: INFO: Waiting for pod pod-configmaps-54b1306b-b792-4097-ae92-314bd16f7f85 to disappear
Apr 4 00:15:44.483: INFO: Pod pod-configmaps-54b1306b-b792-4097-ae92-314bd16f7f85 no longer exists
[AfterEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 4 00:15:44.483: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-2830" for this suite.
•
{"msg":"PASSED [sig-api-machinery] Secrets should be consumable via the environment [NodeConformance] [Conformance]","total":275,"completed":149,"skipped":2563,"failed":0}
SS
------------------------------
[sig-cli] Kubectl client Update Demo
  should scale a replication controller [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 4 00:15:44.489: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219
[BeforeEach] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:271
[It] should scale a replication controller [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating a replication controller
Apr 4 00:15:44.563: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-5260'
Apr 4 00:15:47.458: INFO: stderr: ""
Apr 4 00:15:47.458: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Apr 4 00:15:47.458: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-5260'
Apr 4 00:15:47.580: INFO: stderr: ""
Apr 4 00:15:47.580: INFO: stdout: "update-demo-nautilus-5f9b6 update-demo-nautilus-gzghg "
Apr 4 00:15:47.580: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-5f9b6 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5260'
Apr 4 00:15:47.662: INFO: stderr: ""
Apr 4 00:15:47.662: INFO: stdout: ""
Apr 4 00:15:47.662: INFO: update-demo-nautilus-5f9b6 is created but not running
Apr 4 00:15:52.662: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-5260'
Apr 4 00:15:52.742: INFO: stderr: ""
Apr 4 00:15:52.742: INFO: stdout: "update-demo-nautilus-5f9b6 update-demo-nautilus-gzghg "
Apr 4 00:15:52.742: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-5f9b6 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5260'
Apr 4 00:15:52.828: INFO: stderr: ""
Apr 4 00:15:52.828: INFO: stdout: "true"
Apr 4 00:15:52.828: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-5f9b6 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-5260'
Apr 4 00:15:52.915: INFO: stderr: ""
Apr 4 00:15:52.915: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Apr 4 00:15:52.915: INFO: validating pod update-demo-nautilus-5f9b6
Apr 4 00:15:52.919: INFO: got data: { "image": "nautilus.jpg" }
Apr 4 00:15:52.919: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Apr 4 00:15:52.919: INFO: update-demo-nautilus-5f9b6 is verified up and running
Apr 4 00:15:52.919: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-gzghg -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5260'
Apr 4 00:15:53.020: INFO: stderr: ""
Apr 4 00:15:53.020: INFO: stdout: "true"
Apr 4 00:15:53.020: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-gzghg -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-5260'
Apr 4 00:15:53.110: INFO: stderr: ""
Apr 4 00:15:53.110: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Apr 4 00:15:53.110: INFO: validating pod update-demo-nautilus-gzghg
Apr 4 00:15:53.114: INFO: got data: { "image": "nautilus.jpg" }
Apr 4 00:15:53.114: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Apr 4 00:15:53.114: INFO: update-demo-nautilus-gzghg is verified up and running
STEP: scaling down the replication controller
Apr 4 00:15:53.116: INFO: scanned /root for discovery docs:
Apr 4 00:15:53.116: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=1 --timeout=5m --namespace=kubectl-5260'
Apr 4 00:15:54.281: INFO: stderr: ""
Apr 4 00:15:54.281: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Apr 4 00:15:54.281: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-5260'
Apr 4 00:15:54.380: INFO: stderr: ""
Apr 4 00:15:54.380: INFO: stdout: "update-demo-nautilus-5f9b6 update-demo-nautilus-gzghg "
STEP: Replicas for name=update-demo: expected=1 actual=2
Apr 4 00:15:59.380: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-5260'
Apr 4 00:15:59.475: INFO: stderr: ""
Apr 4 00:15:59.475: INFO: stdout: "update-demo-nautilus-gzghg "
Apr 4 00:15:59.475: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-gzghg -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5260'
Apr 4 00:15:59.566: INFO: stderr: ""
Apr 4 00:15:59.566: INFO: stdout: "true"
Apr 4 00:15:59.566: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-gzghg -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-5260'
Apr 4 00:15:59.662: INFO: stderr: ""
Apr 4 00:15:59.662: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Apr 4 00:15:59.662: INFO: validating pod update-demo-nautilus-gzghg
Apr 4 00:15:59.666: INFO: got data: { "image": "nautilus.jpg" }
Apr 4 00:15:59.666: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Apr 4 00:15:59.666: INFO: update-demo-nautilus-gzghg is verified up and running STEP: scaling up the replication controller Apr 4 00:15:59.668: INFO: scanned /root for discovery docs: Apr 4 00:15:59.668: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=2 --timeout=5m --namespace=kubectl-5260' Apr 4 00:16:00.800: INFO: stderr: "" Apr 4 00:16:00.800: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" STEP: waiting for all containers in name=update-demo pods to come up. Apr 4 00:16:00.800: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-5260' Apr 4 00:16:00.884: INFO: stderr: "" Apr 4 00:16:00.884: INFO: stdout: "update-demo-nautilus-8m2zg update-demo-nautilus-gzghg " Apr 4 00:16:00.884: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-8m2zg -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5260' Apr 4 00:16:00.968: INFO: stderr: "" Apr 4 00:16:00.968: INFO: stdout: "" Apr 4 00:16:00.968: INFO: update-demo-nautilus-8m2zg is created but not running Apr 4 00:16:05.969: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-5260' Apr 4 00:16:06.076: INFO: stderr: "" Apr 4 00:16:06.076: INFO: stdout: "update-demo-nautilus-8m2zg update-demo-nautilus-gzghg " Apr 4 00:16:06.076: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-8m2zg -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5260' Apr 4 00:16:06.169: INFO: stderr: "" Apr 4 00:16:06.169: INFO: stdout: "true" Apr 4 00:16:06.169: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-8m2zg -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-5260' Apr 4 00:16:06.268: INFO: stderr: "" Apr 4 00:16:06.268: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Apr 4 00:16:06.268: INFO: validating pod update-demo-nautilus-8m2zg Apr 4 00:16:06.273: INFO: got data: { "image": "nautilus.jpg" } Apr 4 00:16:06.273: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . 
Apr 4 00:16:06.273: INFO: update-demo-nautilus-8m2zg is verified up and running Apr 4 00:16:06.273: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-gzghg -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5260' Apr 4 00:16:06.367: INFO: stderr: "" Apr 4 00:16:06.367: INFO: stdout: "true" Apr 4 00:16:06.367: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-gzghg -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-5260' Apr 4 00:16:06.457: INFO: stderr: "" Apr 4 00:16:06.457: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Apr 4 00:16:06.457: INFO: validating pod update-demo-nautilus-gzghg Apr 4 00:16:06.461: INFO: got data: { "image": "nautilus.jpg" } Apr 4 00:16:06.461: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Apr 4 00:16:06.461: INFO: update-demo-nautilus-gzghg is verified up and running STEP: using delete to clean up resources Apr 4 00:16:06.461: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-5260' Apr 4 00:16:06.569: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" Apr 4 00:16:06.569: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" Apr 4 00:16:06.569: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-5260' Apr 4 00:16:06.676: INFO: stderr: "No resources found in kubectl-5260 namespace.\n" Apr 4 00:16:06.676: INFO: stdout: "" Apr 4 00:16:06.676: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-5260 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Apr 4 00:16:06.769: INFO: stderr: "" Apr 4 00:16:06.769: INFO: stdout: "update-demo-nautilus-8m2zg\nupdate-demo-nautilus-gzghg\n" Apr 4 00:16:07.269: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-5260' Apr 4 00:16:07.368: INFO: stderr: "No resources found in kubectl-5260 namespace.\n" Apr 4 00:16:07.368: INFO: stdout: "" Apr 4 00:16:07.368: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-5260 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Apr 4 00:16:07.528: INFO: stderr: "" Apr 4 00:16:07.528: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 4 00:16:07.529: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-5260" for this suite. 
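The cleanup step just above polls with a go-template that emits only pods whose `metadata.deletionTimestamp` is unset, so deletion is considered complete once stdout comes back empty (as it does on the second poll). A rough Python equivalent of that filter over a hypothetical pod list:

```python
def pods_not_terminating(pod_list):
    """Mirror the cleanup go-template: keep names of pods that do not
    yet carry a metadata.deletionTimestamp."""
    return [p["metadata"]["name"]
            for p in pod_list.get("items", [])
            if not p["metadata"].get("deletionTimestamp")]

# Hypothetical list: one pod already marked for deletion, one not.
pod_list = {"items": [
    {"metadata": {"name": "update-demo-nautilus-8m2zg",
                  "deletionTimestamp": "2020-04-04T00:16:06Z"}},
    {"metadata": {"name": "update-demo-nautilus-gzghg"}},
]}

print(pods_not_terminating(pod_list))  # ['update-demo-nautilus-gzghg']
```

Filtering on `deletionTimestamp` rather than on pod existence is deliberate: with `--grace-period=0 --force`, a pod can linger in the API briefly after deletion, and this template treats such pods as already gone.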
• [SLOW TEST:23.182 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:269 should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Update Demo should scale a replication controller [Conformance]","total":275,"completed":150,"skipped":2565,"failed":0} SSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 4 00:16:07.671: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating secret with name secret-test-map-3cd81281-2dc3-4730-9b46-0910d925a1d2 STEP: Creating a pod to test consume secrets Apr 4 00:16:07.744: INFO: Waiting up to 5m0s for pod "pod-secrets-5ef8aa7f-e3ff-457e-bfb4-aaf7fb9125fe" in namespace "secrets-956" to be "Succeeded or Failed" Apr 4 00:16:07.756: INFO: Pod "pod-secrets-5ef8aa7f-e3ff-457e-bfb4-aaf7fb9125fe": Phase="Pending", Reason="", readiness=false. Elapsed: 11.643535ms Apr 4 00:16:09.759: INFO: Pod "pod-secrets-5ef8aa7f-e3ff-457e-bfb4-aaf7fb9125fe": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.015440573s Apr 4 00:16:11.772: INFO: Pod "pod-secrets-5ef8aa7f-e3ff-457e-bfb4-aaf7fb9125fe": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.028312389s STEP: Saw pod success Apr 4 00:16:11.772: INFO: Pod "pod-secrets-5ef8aa7f-e3ff-457e-bfb4-aaf7fb9125fe" satisfied condition "Succeeded or Failed" Apr 4 00:16:11.775: INFO: Trying to get logs from node latest-worker2 pod pod-secrets-5ef8aa7f-e3ff-457e-bfb4-aaf7fb9125fe container secret-volume-test: STEP: delete the pod Apr 4 00:16:11.802: INFO: Waiting for pod pod-secrets-5ef8aa7f-e3ff-457e-bfb4-aaf7fb9125fe to disappear Apr 4 00:16:11.813: INFO: Pod pod-secrets-5ef8aa7f-e3ff-457e-bfb4-aaf7fb9125fe no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 4 00:16:11.813: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-956" for this suite. •{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":275,"completed":151,"skipped":2568,"failed":0} SS ------------------------------ [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 4 00:16:11.819: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should provide container's cpu limit [NodeConformance] [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test downward API volume plugin Apr 4 00:16:11.989: INFO: Waiting up to 5m0s for pod "downwardapi-volume-31dc49b9-896f-4e5c-a3da-2c6e7cf67bb9" in namespace "downward-api-1211" to be "Succeeded or Failed" Apr 4 00:16:12.066: INFO: Pod "downwardapi-volume-31dc49b9-896f-4e5c-a3da-2c6e7cf67bb9": Phase="Pending", Reason="", readiness=false. Elapsed: 77.341083ms Apr 4 00:16:14.070: INFO: Pod "downwardapi-volume-31dc49b9-896f-4e5c-a3da-2c6e7cf67bb9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.080517795s Apr 4 00:16:16.074: INFO: Pod "downwardapi-volume-31dc49b9-896f-4e5c-a3da-2c6e7cf67bb9": Phase="Running", Reason="", readiness=true. Elapsed: 4.084634286s Apr 4 00:16:18.078: INFO: Pod "downwardapi-volume-31dc49b9-896f-4e5c-a3da-2c6e7cf67bb9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.088503524s STEP: Saw pod success Apr 4 00:16:18.078: INFO: Pod "downwardapi-volume-31dc49b9-896f-4e5c-a3da-2c6e7cf67bb9" satisfied condition "Succeeded or Failed" Apr 4 00:16:18.080: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-31dc49b9-896f-4e5c-a3da-2c6e7cf67bb9 container client-container: STEP: delete the pod Apr 4 00:16:18.121: INFO: Waiting for pod downwardapi-volume-31dc49b9-896f-4e5c-a3da-2c6e7cf67bb9 to disappear Apr 4 00:16:18.137: INFO: Pod downwardapi-volume-31dc49b9-896f-4e5c-a3da-2c6e7cf67bb9 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 4 00:16:18.137: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-1211" for this suite. 
• [SLOW TEST:6.325 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:37 should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance]","total":275,"completed":152,"skipped":2570,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 4 00:16:18.145: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating configMap with name projected-configmap-test-volume-map-bd7a298b-fb31-4d36-bade-5c00850378b5 STEP: Creating a pod to test consume configMaps Apr 4 00:16:18.256: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-974f8448-4aaf-4662-88ef-dc26cbd3cb5e" in namespace "projected-406" to be "Succeeded or Failed" Apr 4 00:16:18.265: INFO: Pod "pod-projected-configmaps-974f8448-4aaf-4662-88ef-dc26cbd3cb5e": Phase="Pending", Reason="", readiness=false. 
Elapsed: 9.224737ms Apr 4 00:16:20.269: INFO: Pod "pod-projected-configmaps-974f8448-4aaf-4662-88ef-dc26cbd3cb5e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012895473s Apr 4 00:16:22.273: INFO: Pod "pod-projected-configmaps-974f8448-4aaf-4662-88ef-dc26cbd3cb5e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.017458054s STEP: Saw pod success Apr 4 00:16:22.273: INFO: Pod "pod-projected-configmaps-974f8448-4aaf-4662-88ef-dc26cbd3cb5e" satisfied condition "Succeeded or Failed" Apr 4 00:16:22.276: INFO: Trying to get logs from node latest-worker2 pod pod-projected-configmaps-974f8448-4aaf-4662-88ef-dc26cbd3cb5e container projected-configmap-volume-test: STEP: delete the pod Apr 4 00:16:22.307: INFO: Waiting for pod pod-projected-configmaps-974f8448-4aaf-4662-88ef-dc26cbd3cb5e to disappear Apr 4 00:16:22.319: INFO: Pod pod-projected-configmaps-974f8448-4aaf-4662-88ef-dc26cbd3cb5e no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 4 00:16:22.319: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-406" for this suite. 
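The volume tests in this stretch all follow the same shape: create a test pod, poll its phase until it reaches "Succeeded or Failed" (waiting up to 5m0s), then fetch the container's logs and delete the pod. A minimal sketch of that poll loop, with a hypothetical `get_phase` callback standing in for the API call (this is an illustration of the pattern, not the framework's actual implementation):

```python
import time

def wait_for_pod_phase(get_phase, target_phases=("Succeeded", "Failed"),
                       timeout=300.0, interval=2.0, sleep=time.sleep):
    """Poll get_phase() until it returns one of target_phases or the
    timeout elapses, mirroring the 'Waiting up to 5m0s for pod ...
    to be "Succeeded or Failed"' loop in the log."""
    waited = 0.0
    while True:
        phase = get_phase()
        if phase in target_phases:
            return phase
        if waited >= timeout:
            raise TimeoutError(f"pod still {phase!r} after {timeout}s")
        sleep(interval)
        waited += interval

# Hypothetical phase sequence, as in the log: Pending, Pending, Succeeded.
phases = iter(["Pending", "Pending", "Succeeded"])
result = wait_for_pod_phase(lambda: next(phases), sleep=lambda _: None)
print(result)  # Succeeded
```

The per-poll "Elapsed:" lines in the log correspond to each iteration of this loop, roughly one every two seconds.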
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]","total":275,"completed":153,"skipped":2605,"failed":0} SSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 4 00:16:22.327: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD preserving unknown fields in an embedded object [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Apr 4 00:16:22.373: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with any unknown properties Apr 4 00:16:24.266: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-6045 create -f -' Apr 4 00:16:27.074: INFO: stderr: "" Apr 4 00:16:27.074: INFO: stdout: "e2e-test-crd-publish-openapi-1042-crd.crd-publish-openapi-test-unknown-in-nested.example.com/test-cr created\n" Apr 4 00:16:27.074: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-6045 delete e2e-test-crd-publish-openapi-1042-crds test-cr' Apr 4 00:16:27.190: INFO: stderr: "" Apr 4 00:16:27.190: INFO: stdout: 
"e2e-test-crd-publish-openapi-1042-crd.crd-publish-openapi-test-unknown-in-nested.example.com \"test-cr\" deleted\n" Apr 4 00:16:27.190: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-6045 apply -f -' Apr 4 00:16:27.432: INFO: stderr: "" Apr 4 00:16:27.432: INFO: stdout: "e2e-test-crd-publish-openapi-1042-crd.crd-publish-openapi-test-unknown-in-nested.example.com/test-cr created\n" Apr 4 00:16:27.432: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-6045 delete e2e-test-crd-publish-openapi-1042-crds test-cr' Apr 4 00:16:27.544: INFO: stderr: "" Apr 4 00:16:27.544: INFO: stdout: "e2e-test-crd-publish-openapi-1042-crd.crd-publish-openapi-test-unknown-in-nested.example.com \"test-cr\" deleted\n" STEP: kubectl explain works to explain CR Apr 4 00:16:27.544: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-1042-crds' Apr 4 00:16:27.779: INFO: stderr: "" Apr 4 00:16:27.779: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-1042-crd\nVERSION: crd-publish-openapi-test-unknown-in-nested.example.com/v1\n\nDESCRIPTION:\n preserve-unknown-properties in nested field for Testing\n\nFIELDS:\n apiVersion\t\n APIVersion defines the versioned schema of this representation of an\n object. Servers should convert recognized schemas to the latest internal\n value, and may reject unrecognized values. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n\n kind\t\n Kind is a string value representing the REST resource this object\n represents. Servers may infer this from the endpoint the client submits\n requests to. Cannot be updated. In CamelCase. 
More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n\n metadata\t\n Standard object's metadata. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n spec\t\n Specification of Waldo\n\n status\t\n Status of Waldo\n\n" [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 4 00:16:30.719: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-6045" for this suite. • [SLOW TEST:8.400 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD preserving unknown fields in an embedded object [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance]","total":275,"completed":154,"skipped":2613,"failed":0} S ------------------------------ [sig-storage] HostPath should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 4 00:16:30.728: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename hostpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] HostPath 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:37 [It] should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test hostPath mode Apr 4 00:16:30.828: INFO: Waiting up to 5m0s for pod "pod-host-path-test" in namespace "hostpath-7215" to be "Succeeded or Failed" Apr 4 00:16:30.832: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 3.456908ms Apr 4 00:16:32.836: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007690784s Apr 4 00:16:34.840: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 4.011559012s Apr 4 00:16:36.844: INFO: Pod "pod-host-path-test": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.01511543s STEP: Saw pod success Apr 4 00:16:36.844: INFO: Pod "pod-host-path-test" satisfied condition "Succeeded or Failed" Apr 4 00:16:36.847: INFO: Trying to get logs from node latest-worker pod pod-host-path-test container test-container-1: STEP: delete the pod Apr 4 00:16:36.863: INFO: Waiting for pod pod-host-path-test to disappear Apr 4 00:16:36.868: INFO: Pod pod-host-path-test no longer exists [AfterEach] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 4 00:16:36.868: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "hostpath-7215" for this suite. 
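The HostPath test that just completed creates a pod ("pod-host-path-test") mounting a hostPath volume and asserts the volume's mode from inside a test container ("test-container-1"). A hedged sketch of such a pod manifest as a Python dict; the field names are standard core/v1, but the image, command, and paths here are assumptions for illustration, not copied from the suite:

```python
# Hedged sketch of a hostPath-mode test pod; image/command/paths are
# illustrative assumptions, not the e2e suite's actual values.
pod = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {"name": "pod-host-path-test"},
    "spec": {
        "restartPolicy": "Never",
        "volumes": [{"name": "test-volume",
                     "hostPath": {"path": "/tmp/host-path-test"}}],
        "containers": [{
            "name": "test-container-1",
            "image": "busybox",
            # 'stat -c %a' prints the octal permission bits of the mount,
            # which the test then reads back from the container's logs
            "command": ["sh", "-c", "stat -c %a /test-volume"],
            "volumeMounts": [{"name": "test-volume",
                              "mountPath": "/test-volume"}],
        }],
    },
}

print(pod["spec"]["containers"][0]["volumeMounts"][0]["mountPath"])  # /test-volume
```

Because the container exits after printing, the pod lands in phase "Succeeded", which is why the log's wait loop treats "Succeeded or Failed" as its terminal condition before pulling logs.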
• [SLOW TEST:6.146 seconds] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:34 should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] HostPath should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":155,"skipped":2614,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Security Context When creating a container with runAsUser should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 4 00:16:36.874: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41 [It] should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Apr 4 00:16:36.930: INFO: Waiting up to 5m0s for pod "busybox-user-65534-a591350b-442e-4a9c-8c9c-b2c05195f0e1" in namespace "security-context-test-8206" to be "Succeeded or Failed" Apr 4 00:16:36.945: INFO: Pod "busybox-user-65534-a591350b-442e-4a9c-8c9c-b2c05195f0e1": Phase="Pending", Reason="", readiness=false. 
Elapsed: 15.263203ms Apr 4 00:16:38.958: INFO: Pod "busybox-user-65534-a591350b-442e-4a9c-8c9c-b2c05195f0e1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.028521926s Apr 4 00:16:40.963: INFO: Pod "busybox-user-65534-a591350b-442e-4a9c-8c9c-b2c05195f0e1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.032929162s Apr 4 00:16:40.963: INFO: Pod "busybox-user-65534-a591350b-442e-4a9c-8c9c-b2c05195f0e1" satisfied condition "Succeeded or Failed" [AfterEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 4 00:16:40.963: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-8206" for this suite. •{"msg":"PASSED [k8s.io] Security Context When creating a container with runAsUser should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":156,"skipped":2632,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD with validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 4 00:16:40.972: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD with validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Apr 4 00:16:41.037: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with known and required 
properties Apr 4 00:16:43.947: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-9682 create -f -' Apr 4 00:16:46.743: INFO: stderr: "" Apr 4 00:16:46.743: INFO: stdout: "e2e-test-crd-publish-openapi-1662-crd.crd-publish-openapi-test-foo.example.com/test-foo created\n" Apr 4 00:16:46.743: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-9682 delete e2e-test-crd-publish-openapi-1662-crds test-foo' Apr 4 00:16:46.840: INFO: stderr: "" Apr 4 00:16:46.840: INFO: stdout: "e2e-test-crd-publish-openapi-1662-crd.crd-publish-openapi-test-foo.example.com \"test-foo\" deleted\n" Apr 4 00:16:46.840: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-9682 apply -f -' Apr 4 00:16:47.074: INFO: stderr: "" Apr 4 00:16:47.074: INFO: stdout: "e2e-test-crd-publish-openapi-1662-crd.crd-publish-openapi-test-foo.example.com/test-foo created\n" Apr 4 00:16:47.074: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-9682 delete e2e-test-crd-publish-openapi-1662-crds test-foo' Apr 4 00:16:47.189: INFO: stderr: "" Apr 4 00:16:47.189: INFO: stdout: "e2e-test-crd-publish-openapi-1662-crd.crd-publish-openapi-test-foo.example.com \"test-foo\" deleted\n" STEP: client-side validation (kubectl create and apply) rejects request with unknown properties when disallowed by the schema Apr 4 00:16:47.189: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-9682 create -f -' Apr 4 00:16:47.419: INFO: rc: 1 Apr 4 00:16:47.419: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-9682 apply -f 
-' Apr 4 00:16:47.627: INFO: rc: 1 STEP: client-side validation (kubectl create and apply) rejects request without required properties Apr 4 00:16:47.627: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-9682 create -f -' Apr 4 00:16:47.922: INFO: rc: 1 Apr 4 00:16:47.922: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-9682 apply -f -' Apr 4 00:16:48.147: INFO: rc: 1 STEP: kubectl explain works to explain CR properties Apr 4 00:16:48.147: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-1662-crds' Apr 4 00:16:48.379: INFO: stderr: "" Apr 4 00:16:48.379: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-1662-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nDESCRIPTION:\n Foo CRD for Testing\n\nFIELDS:\n apiVersion\t\n APIVersion defines the versioned schema of this representation of an\n object. Servers should convert recognized schemas to the latest internal\n value, and may reject unrecognized values. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n\n kind\t\n Kind is a string value representing the REST resource this object\n represents. Servers may infer this from the endpoint the client submits\n requests to. Cannot be updated. In CamelCase. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n\n metadata\t\n Standard object's metadata. 
More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n spec\t\n Specification of Foo\n\n status\t\n Status of Foo\n\n" STEP: kubectl explain works to explain CR properties recursively Apr 4 00:16:48.380: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-1662-crds.metadata' Apr 4 00:16:48.612: INFO: stderr: "" Apr 4 00:16:48.612: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-1662-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: metadata \n\nDESCRIPTION:\n Standard object's metadata. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n ObjectMeta is metadata that all persisted resources must have, which\n includes all objects users must create.\n\nFIELDS:\n annotations\t\n Annotations is an unstructured key value map stored with a resource that\n may be set by external tools to store and retrieve arbitrary metadata. They\n are not queryable and should be preserved when modifying objects. More\n info: http://kubernetes.io/docs/user-guide/annotations\n\n clusterName\t\n The name of the cluster which the object belongs to. This is used to\n distinguish resources with same name and namespace in different clusters.\n This field is not set anywhere right now and apiserver is going to ignore\n it if set in create or update request.\n\n creationTimestamp\t\n CreationTimestamp is a timestamp representing the server time when this\n object was created. It is not guaranteed to be set in happens-before order\n across separate operations. Clients may not set this value. It is\n represented in RFC3339 form and is in UTC. Populated by the system.\n Read-only. Null for lists. 
More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n deletionGracePeriodSeconds\t\n Number of seconds allowed for this object to gracefully terminate before it\n will be removed from the system. Only set when deletionTimestamp is also\n set. May only be shortened. Read-only.\n\n deletionTimestamp\t\n DeletionTimestamp is RFC 3339 date and time at which this resource will be\n deleted. This field is set by the server when a graceful deletion is\n requested by the user, and is not directly settable by a client. The\n resource is expected to be deleted (no longer visible from resource lists,\n and not reachable by name) after the time in this field, once the\n finalizers list is empty. As long as the finalizers list contains items,\n deletion is blocked. Once the deletionTimestamp is set, this value may not\n be unset or be set further into the future, although it may be shortened or\n the resource may be deleted prior to this time. For example, a user may\n request that a pod is deleted in 30 seconds. The Kubelet will react by\n sending a graceful termination signal to the containers in the pod. After\n that 30 seconds, the Kubelet will send a hard termination signal (SIGKILL)\n to the container and after cleanup, remove the pod from the API. In the\n presence of network partitions, this object may still exist after this\n timestamp, until an administrator or automated process can determine the\n resource is fully terminated. If not set, graceful deletion of the object\n has not been requested. Populated by the system when a graceful deletion is\n requested. Read-only. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n finalizers\t<[]string>\n Must be empty before the object is deleted from the registry. Each entry is\n an identifier for the responsible component that will remove the entry from\n the list. 
If the deletionTimestamp of the object is non-nil, entries in\n this list can only be removed. Finalizers may be processed and removed in\n any order. Order is NOT enforced because it introduces significant risk of\n stuck finalizers. finalizers is a shared field, any actor with permission\n can reorder it. If the finalizer list is processed in order, then this can\n lead to a situation in which the component responsible for the first\n finalizer in the list is waiting for a signal (field value, external\n system, or other) produced by a component responsible for a finalizer later\n in the list, resulting in a deadlock. Without enforced ordering finalizers\n are free to order amongst themselves and are not vulnerable to ordering\n changes in the list.\n\n generateName\t\n GenerateName is an optional prefix, used by the server, to generate a\n unique name ONLY IF the Name field has not been provided. If this field is\n used, the name returned to the client will be different than the name\n passed. This value will also be combined with a unique suffix. The provided\n value has the same validation rules as the Name field, and may be truncated\n by the length of the suffix required to make the value unique on the\n server. If this field is specified and the generated name exists, the\n server will NOT return a 409 - instead, it will either return 201 Created\n or 500 with Reason ServerTimeout indicating a unique name could not be\n found in the time allotted, and the client should retry (optionally after\n the time indicated in the Retry-After header). Applied only if Name is not\n specified. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#idempotency\n\n generation\t\n A sequence number representing a specific generation of the desired state.\n Populated by the system. Read-only.\n\n labels\t\n Map of string keys and values that can be used to organize and categorize\n (scope and select) objects. 
May match selectors of replication controllers\n and services. More info: http://kubernetes.io/docs/user-guide/labels\n\n managedFields\t<[]Object>\n ManagedFields maps workflow-id and version to the set of fields that are\n managed by that workflow. This is mostly for internal housekeeping, and\n users typically shouldn't need to set or understand this field. A workflow\n can be the user's name, a controller's name, or the name of a specific\n apply path like \"ci-cd\". The set of fields is always in the version that\n the workflow used when modifying the object.\n\n name\t\n Name must be unique within a namespace. Is required when creating\n resources, although some resources may allow a client to request the\n generation of an appropriate name automatically. Name is primarily intended\n for creation idempotence and configuration definition. Cannot be updated.\n More info: http://kubernetes.io/docs/user-guide/identifiers#names\n\n namespace\t\n Namespace defines the space within each name must be unique. An empty\n namespace is equivalent to the \"default\" namespace, but \"default\" is the\n canonical representation. Not all objects are required to be scoped to a\n namespace - the value of this field for those objects will be empty. Must\n be a DNS_LABEL. Cannot be updated. More info:\n http://kubernetes.io/docs/user-guide/namespaces\n\n ownerReferences\t<[]Object>\n List of objects depended by this object. If ALL objects in the list have\n been deleted, this object will be garbage collected. If this object is\n managed by a controller, then an entry in this list will point to this\n controller, with the controller field set to true. There cannot be more\n than one managing controller.\n\n resourceVersion\t\n An opaque value that represents the internal version of this object that\n can be used by clients to determine when objects have changed. 
May be used\n for optimistic concurrency, change detection, and the watch operation on a\n resource or set of resources. Clients must treat these values as opaque and\n passed unmodified back to the server. They may only be valid for a\n particular resource or set of resources. Populated by the system.\n Read-only. Value must be treated as opaque by clients and . More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#concurrency-control-and-consistency\n\n selfLink\t\n SelfLink is a URL representing this object. Populated by the system.\n Read-only. DEPRECATED Kubernetes will stop propagating this field in 1.20\n release and the field is planned to be removed in 1.21 release.\n\n uid\t\n UID is the unique in time and space value for this object. It is typically\n generated by the server on successful creation of a resource and is not\n allowed to change on PUT operations. Populated by the system. Read-only.\n More info: http://kubernetes.io/docs/user-guide/identifiers#uids\n\n" Apr 4 00:16:48.613: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-1662-crds.spec' Apr 4 00:16:48.851: INFO: stderr: "" Apr 4 00:16:48.852: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-1662-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: spec \n\nDESCRIPTION:\n Specification of Foo\n\nFIELDS:\n bars\t<[]Object>\n List of Bars and their specs.\n\n" Apr 4 00:16:48.852: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-1662-crds.spec.bars' Apr 4 00:16:49.086: INFO: stderr: "" Apr 4 00:16:49.086: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-1662-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: bars <[]Object>\n\nDESCRIPTION:\n List of Bars and their specs.\n\nFIELDS:\n age\t\n Age of Bar.\n\n bazs\t<[]string>\n 
List of Bazs.\n\n name\t -required-\n Name of Bar.\n\n" STEP: kubectl explain works to return error when explain is called on property that doesn't exist Apr 4 00:16:49.086: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-1662-crds.spec.bars2' Apr 4 00:16:49.288: INFO: rc: 1 [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 4 00:16:51.193: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-9682" for this suite. • [SLOW TEST:10.228 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD with validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD with validation schema [Conformance]","total":275,"completed":157,"skipped":2657,"failed":0} S ------------------------------ [k8s.io] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 4 00:16:51.201: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow substituting values in a container's args [NodeConformance] [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test substitution in container's args
Apr 4 00:16:51.273: INFO: Waiting up to 5m0s for pod "var-expansion-975c1ce1-fe25-4a06-9e22-05be3cc06f1f" in namespace "var-expansion-3799" to be "Succeeded or Failed"
Apr 4 00:16:51.299: INFO: Pod "var-expansion-975c1ce1-fe25-4a06-9e22-05be3cc06f1f": Phase="Pending", Reason="", readiness=false. Elapsed: 25.601171ms
Apr 4 00:16:53.303: INFO: Pod "var-expansion-975c1ce1-fe25-4a06-9e22-05be3cc06f1f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02974206s
Apr 4 00:16:55.307: INFO: Pod "var-expansion-975c1ce1-fe25-4a06-9e22-05be3cc06f1f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.033887009s
STEP: Saw pod success
Apr 4 00:16:55.307: INFO: Pod "var-expansion-975c1ce1-fe25-4a06-9e22-05be3cc06f1f" satisfied condition "Succeeded or Failed"
Apr 4 00:16:55.310: INFO: Trying to get logs from node latest-worker pod var-expansion-975c1ce1-fe25-4a06-9e22-05be3cc06f1f container dapi-container:
STEP: delete the pod
Apr 4 00:16:55.331: INFO: Waiting for pod var-expansion-975c1ce1-fe25-4a06-9e22-05be3cc06f1f to disappear
Apr 4 00:16:55.384: INFO: Pod var-expansion-975c1ce1-fe25-4a06-9e22-05be3cc06f1f no longer exists
[AfterEach] [k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 4 00:16:55.384: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-3799" for this suite.
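For context (an editorial aside, not part of the captured log): the substitution this spec exercises follows Kubernetes' $(VAR) expansion rules for container command/args — a defined variable is substituted, an undefined reference is left in place verbatim, and "$$" escapes to a literal "$". A minimal sketch of those semantics, assuming a hypothetical helper name `expand_args` (this is not the e2e framework's code):

```python
def expand_args(arg, env):
    """Sketch of Kubernetes $(VAR) expansion for container args.

    - "$(NAME)" is replaced when NAME is defined in env.
    - An undefined "$(NAME)" is left in place verbatim.
    - "$$" collapses to a single "$", so "$$(NAME)" yields a literal "$(NAME)".
    """
    out = []
    i = 0
    while i < len(arg):
        c = arg[i]
        if c == "$" and i + 1 < len(arg):
            if arg[i + 1] == "$":          # escaped dollar sign
                out.append("$")
                i += 2
                continue
            if arg[i + 1] == "(":
                end = arg.find(")", i + 2)
                if end != -1:
                    name = arg[i + 2:end]
                    # substitute when defined, otherwise keep the reference as-is
                    out.append(env.get(name, arg[i:end + 1]))
                    i = end + 1
                    continue
        out.append(c)
        i += 1
    return "".join(out)
```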
•{"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance]","total":275,"completed":158,"skipped":2658,"failed":0}
SSSSSSSS
------------------------------
[sig-apps] ReplicationController should adopt matching pods on creation [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 4 00:16:55.392: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should adopt matching pods on creation [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Given a Pod with a 'name' label pod-adoption is created
STEP: When a replication controller with a matching selector is created
STEP: Then the orphan pod is adopted
[AfterEach] [sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 4 00:17:00.499: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-7280" for this suite.
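For context (an editorial aside, not part of the captured log): adoption here means the new ReplicationController claims an orphan pod whose labels satisfy its selector by setting itself as the pod's controller ownerReference. A simplified sketch of that matching logic, using plain dicts in place of real API objects (shapes and the `adopt_matching` name are illustrative only):

```python
def adopt_matching(rc, pods):
    """Sketch: a controller adopts orphan pods whose labels match its selector.

    An orphan is a pod with no controlling ownerReference; adoption records a
    single ownerReference with controller=True, as the e2e spec verifies.
    """
    selector = rc["spec"]["selector"]
    adopted = []
    for pod in pods:
        meta = pod["metadata"]
        labels = meta.get("labels", {})
        has_controller = any(ref.get("controller") for ref in meta.get("ownerReferences", []))
        # only orphans whose labels satisfy every selector key/value pair
        if not has_controller and all(labels.get(k) == v for k, v in selector.items()):
            meta.setdefault("ownerReferences", []).append(
                {"kind": "ReplicationController", "name": rc["metadata"]["name"], "controller": True}
            )
            adopted.append(meta["name"])
    return adopted
```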
• [SLOW TEST:5.136 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
should adopt matching pods on creation [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-apps] ReplicationController should adopt matching pods on creation [Conformance]","total":275,"completed":159,"skipped":2666,"failed":0}
SSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 4 00:17:00.528: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating configMap with name projected-configmap-test-volume-287e5ce9-013c-4966-a36c-8ee29d155c72
STEP: Creating a pod to test consume configMaps
Apr 4 00:17:00.638: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-330eb096-1ab6-433b-af99-4de2db48a310" in namespace "projected-3796" to be "Succeeded or Failed"
Apr 4 00:17:00.641: INFO: Pod "pod-projected-configmaps-330eb096-1ab6-433b-af99-4de2db48a310": Phase="Pending", Reason="", readiness=false. Elapsed: 3.469405ms
Apr 4 00:17:02.645: INFO: Pod "pod-projected-configmaps-330eb096-1ab6-433b-af99-4de2db48a310": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007731496s
Apr 4 00:17:04.650: INFO: Pod "pod-projected-configmaps-330eb096-1ab6-433b-af99-4de2db48a310": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012372132s
STEP: Saw pod success
Apr 4 00:17:04.650: INFO: Pod "pod-projected-configmaps-330eb096-1ab6-433b-af99-4de2db48a310" satisfied condition "Succeeded or Failed"
Apr 4 00:17:04.654: INFO: Trying to get logs from node latest-worker2 pod pod-projected-configmaps-330eb096-1ab6-433b-af99-4de2db48a310 container projected-configmap-volume-test:
STEP: delete the pod
Apr 4 00:17:04.688: INFO: Waiting for pod pod-projected-configmaps-330eb096-1ab6-433b-af99-4de2db48a310 to disappear
Apr 4 00:17:04.701: INFO: Pod pod-projected-configmaps-330eb096-1ab6-433b-af99-4de2db48a310 no longer exists
[AfterEach] [sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 4 00:17:04.701: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-3796" for this suite.
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":275,"completed":160,"skipped":2677,"failed":0}
SSSSS
------------------------------
[sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] EmptyDir wrapper volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 4 00:17:04.712: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir-wrapper
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not cause race condition when used for configmaps [Serial] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating 50 configmaps
STEP: Creating RC which spawns configmap-volume pods
Apr 4 00:17:05.341: INFO: Pod name wrapped-volume-race-494aecd4-9465-4621-ad14-0c30711118f1: Found 0 pods out of 5
Apr 4 00:17:10.349: INFO: Pod name wrapped-volume-race-494aecd4-9465-4621-ad14-0c30711118f1: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-494aecd4-9465-4621-ad14-0c30711118f1 in namespace emptydir-wrapper-5639, will wait for the garbage collector to delete the pods
Apr 4 00:17:24.486: INFO: Deleting ReplicationController wrapped-volume-race-494aecd4-9465-4621-ad14-0c30711118f1 took: 11.611627ms
Apr 4 00:17:24.786: INFO: Terminating ReplicationController wrapped-volume-race-494aecd4-9465-4621-ad14-0c30711118f1 pods took: 300.259156ms
STEP: Creating RC which spawns configmap-volume pods
Apr 4 00:17:33.816: INFO: Pod name wrapped-volume-race-001b473b-6e6e-4319-9452-c7e9e4a5f552: Found 0 pods out of 5
Apr 4 00:17:38.824: INFO: Pod name wrapped-volume-race-001b473b-6e6e-4319-9452-c7e9e4a5f552: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-001b473b-6e6e-4319-9452-c7e9e4a5f552 in namespace emptydir-wrapper-5639, will wait for the garbage collector to delete the pods
Apr 4 00:17:53.052: INFO: Deleting ReplicationController wrapped-volume-race-001b473b-6e6e-4319-9452-c7e9e4a5f552 took: 14.258387ms
Apr 4 00:17:53.353: INFO: Terminating ReplicationController wrapped-volume-race-001b473b-6e6e-4319-9452-c7e9e4a5f552 pods took: 300.250552ms
STEP: Creating RC which spawns configmap-volume pods
Apr 4 00:18:04.087: INFO: Pod name wrapped-volume-race-b73fe48c-1136-4832-92ea-7fec6cd69458: Found 0 pods out of 5
Apr 4 00:18:09.093: INFO: Pod name wrapped-volume-race-b73fe48c-1136-4832-92ea-7fec6cd69458: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-b73fe48c-1136-4832-92ea-7fec6cd69458 in namespace emptydir-wrapper-5639, will wait for the garbage collector to delete the pods
Apr 4 00:18:23.181: INFO: Deleting ReplicationController wrapped-volume-race-b73fe48c-1136-4832-92ea-7fec6cd69458 took: 7.774227ms
Apr 4 00:18:23.482: INFO: Terminating ReplicationController wrapped-volume-race-b73fe48c-1136-4832-92ea-7fec6cd69458 pods took: 300.266426ms
STEP: Cleaning up the configMaps
[AfterEach] [sig-storage] EmptyDir wrapper volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 4 00:18:33.632: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-wrapper-5639" for this suite.
• [SLOW TEST:88.942 seconds] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 should not cause race condition when used for configmaps [Serial] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Conformance]","total":275,"completed":161,"skipped":2682,"failed":0} SSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl version should check is all data is printed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 4 00:18:33.654: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219 [It] should check is all data is printed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Apr 4 00:18:33.691: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config version' Apr 4 00:18:33.829: INFO: stderr: "" Apr 4 00:18:33.829: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"19+\", GitVersion:\"v1.19.0-alpha.0.779+84dc7046797aad\", GitCommit:\"84dc7046797aad80f258b6740a98e79199c8bb4d\", GitTreeState:\"clean\", BuildDate:\"2020-03-15T16:56:42Z\", GoVersion:\"go1.13.8\", Compiler:\"gc\", Platform:\"linux/amd64\"}\nServer Version: version.Info{Major:\"1\", Minor:\"17\", 
GitVersion:\"v1.17.0\", GitCommit:\"70132b0f130acc0bed193d9ba59dd186f0e634cf\", GitTreeState:\"clean\", BuildDate:\"2020-01-14T00:09:19Z\", GoVersion:\"go1.13.4\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 4 00:18:33.829: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-9682" for this suite. •{"msg":"PASSED [sig-cli] Kubectl client Kubectl version should check is all data is printed [Conformance]","total":275,"completed":162,"skipped":2688,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] LimitRange should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-scheduling] LimitRange /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 4 00:18:33.839: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename limitrange STEP: Waiting for a default service account to be provisioned in namespace [It] should create a LimitRange with defaults and ensure pod has those defaults applied. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a LimitRange STEP: Setting up watch STEP: Submitting a LimitRange Apr 4 00:18:33.926: INFO: observed the limitRanges list STEP: Verifying LimitRange creation was observed STEP: Fetching the LimitRange to ensure it has proper values Apr 4 00:18:33.931: INFO: Verifying requests: expected map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {} BinarySI} memory:{{209715200 0} {} BinarySI}] with actual map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {} BinarySI} memory:{{209715200 0} {} BinarySI}] Apr 4 00:18:33.931: INFO: Verifying limits: expected map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] with actual map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] STEP: Creating a Pod with no resource requirements STEP: Ensuring Pod has resource requirements applied from LimitRange Apr 4 00:18:33.936: INFO: Verifying requests: expected map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {} BinarySI} memory:{{209715200 0} {} BinarySI}] with actual map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {} BinarySI} memory:{{209715200 0} {} BinarySI}] Apr 4 00:18:33.936: INFO: Verifying limits: expected map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] with actual map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] STEP: Creating a Pod with partial resource requirements STEP: Ensuring Pod has merged resource requirements applied from LimitRange Apr 4 00:18:33.973: INFO: Verifying requests: expected map[cpu:{{300 -3} {} 300m DecimalSI} 
ephemeral-storage:{{161061273600 0} {} 150Gi BinarySI} memory:{{157286400 0} {} 150Mi BinarySI}] with actual map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{161061273600 0} {} 150Gi BinarySI} memory:{{157286400 0} {} 150Mi BinarySI}] Apr 4 00:18:33.973: INFO: Verifying limits: expected map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] with actual map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] STEP: Failing to create a Pod with less than min resources STEP: Failing to create a Pod with more than max resources STEP: Updating a LimitRange STEP: Verifying LimitRange updating is effective STEP: Creating a Pod with less than former min resources STEP: Failing to create a Pod with more than max resources STEP: Deleting a LimitRange STEP: Verifying the LimitRange was deleted Apr 4 00:18:41.246: INFO: limitRange is already deleted STEP: Creating a Pod with more than former max resources [AfterEach] [sig-scheduling] LimitRange /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 4 00:18:41.252: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "limitrange-4743" for this suite. • [SLOW TEST:7.457 seconds] [sig-scheduling] LimitRange /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-scheduling] LimitRange should create a LimitRange with defaults and ensure pod has those defaults applied. 
[Conformance]","total":275,"completed":163,"skipped":2705,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 4 00:18:41.296: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test downward API volume plugin Apr 4 00:18:41.423: INFO: Waiting up to 5m0s for pod "downwardapi-volume-6d1dd603-3afa-4c40-87d1-3e3562db32f4" in namespace "downward-api-4887" to be "Succeeded or Failed" Apr 4 00:18:41.427: INFO: Pod "downwardapi-volume-6d1dd603-3afa-4c40-87d1-3e3562db32f4": Phase="Pending", Reason="", readiness=false. Elapsed: 3.832193ms Apr 4 00:18:43.431: INFO: Pod "downwardapi-volume-6d1dd603-3afa-4c40-87d1-3e3562db32f4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007695685s Apr 4 00:18:45.435: INFO: Pod "downwardapi-volume-6d1dd603-3afa-4c40-87d1-3e3562db32f4": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.011517793s STEP: Saw pod success Apr 4 00:18:45.435: INFO: Pod "downwardapi-volume-6d1dd603-3afa-4c40-87d1-3e3562db32f4" satisfied condition "Succeeded or Failed" Apr 4 00:18:45.438: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-6d1dd603-3afa-4c40-87d1-3e3562db32f4 container client-container: STEP: delete the pod Apr 4 00:18:45.499: INFO: Waiting for pod downwardapi-volume-6d1dd603-3afa-4c40-87d1-3e3562db32f4 to disappear Apr 4 00:18:45.502: INFO: Pod downwardapi-volume-6d1dd603-3afa-4c40-87d1-3e3562db32f4 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 4 00:18:45.502: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-4887" for this suite. •{"msg":"PASSED [sig-storage] Downward API volume should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":164,"skipped":2717,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 4 00:18:45.512: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test emptydir volume type on tmpfs Apr 4 00:18:45.567: INFO: Waiting up to 5m0s for pod 
"pod-36cff2ce-aec2-43e5-8327-14109cbaf8d6" in namespace "emptydir-4708" to be "Succeeded or Failed" Apr 4 00:18:45.571: INFO: Pod "pod-36cff2ce-aec2-43e5-8327-14109cbaf8d6": Phase="Pending", Reason="", readiness=false. Elapsed: 3.816511ms Apr 4 00:18:47.625: INFO: Pod "pod-36cff2ce-aec2-43e5-8327-14109cbaf8d6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.057508493s Apr 4 00:18:49.628: INFO: Pod "pod-36cff2ce-aec2-43e5-8327-14109cbaf8d6": Phase="Pending", Reason="", readiness=false. Elapsed: 4.06086174s Apr 4 00:18:51.632: INFO: Pod "pod-36cff2ce-aec2-43e5-8327-14109cbaf8d6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.064515634s STEP: Saw pod success Apr 4 00:18:51.632: INFO: Pod "pod-36cff2ce-aec2-43e5-8327-14109cbaf8d6" satisfied condition "Succeeded or Failed" Apr 4 00:18:51.635: INFO: Trying to get logs from node latest-worker pod pod-36cff2ce-aec2-43e5-8327-14109cbaf8d6 container test-container: STEP: delete the pod Apr 4 00:18:51.656: INFO: Waiting for pod pod-36cff2ce-aec2-43e5-8327-14109cbaf8d6 to disappear Apr 4 00:18:51.673: INFO: Pod pod-36cff2ce-aec2-43e5-8327-14109cbaf8d6 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 4 00:18:51.673: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-4708" for this suite. 
• [SLOW TEST:6.168 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42 volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":165,"skipped":2732,"failed":0} SSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 4 00:18:51.680: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Apr 4 00:18:52.446: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Apr 4 00:18:54.457: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", 
LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721556332, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721556332, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721556332, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721556332, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6cc9cc9dc\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Apr 4 00:18:57.486: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Registering a validating webhook on ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects, via the AdmissionRegistration API STEP: Registering a mutating webhook on ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects, via the AdmissionRegistration API STEP: Creating a dummy validating-webhook-configuration object STEP: Deleting the validating-webhook-configuration, which should be possible to remove STEP: Creating a dummy mutating-webhook-configuration object STEP: Deleting the mutating-webhook-configuration, which should be possible to remove [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 4 00:18:57.591: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-4347" for this suite. 
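The webhooks registered above have rules that match ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects themselves; the test then confirms the API server ignores such webhooks when admitting webhook-configuration objects, so the dummy configurations stay mutable and deletable. A minimal sketch of such a registration — names, namespace, path, and CA bundle are placeholders:

```yaml
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingWebhookConfiguration
metadata:
  name: deny-webhook-configuration-deletions    # placeholder name
webhooks:
- name: deny-webhook-configuration-deletions.example.com   # placeholder
  rules:
  - apiGroups: ["admissionregistration.k8s.io"]
    apiVersions: ["*"]
    operations: ["CREATE", "UPDATE", "DELETE"]
    resources: ["validatingwebhookconfigurations", "mutatingwebhookconfigurations"]
  clientConfig:
    service:
      namespace: default            # placeholder; the test uses its own namespace
      name: e2e-test-webhook
      path: /always-deny            # placeholder path
    caBundle: BASE64_ENCODED_CA     # placeholder
  admissionReviewVersions: ["v1"]
  sideEffects: None
  failurePolicy: Fail
```

If the API server honored this webhook for its own configuration objects, the subsequent "Deleting the ... which should be possible to remove" steps would fail.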
STEP: Destroying namespace "webhook-4347-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:5.995 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should not be able to mutate or prevent deletion of webhook configuration objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","total":275,"completed":166,"skipped":2737,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 4 00:18:57.676: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should orphan pods created by rc if delete options say so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: create the rc STEP: delete the rc STEP: wait for the rc to be deleted STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the pods STEP: Gathering metrics W0404 00:19:37.862661 7 metrics_grabber.go:84] Master node is not registered. 
Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Apr 4 00:19:37.862: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 4 00:19:37.862: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-6341" for this suite. 
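The garbage-collector test deletes the ReplicationController with an orphaning delete and then waits 30 seconds to confirm the pods are not collected. The delete options that produce this behavior correspond to the following request body:

```yaml
# DeleteOptions sent with the DELETE request for the RC. "Orphan" tells
# the garbage collector to strip owner references from the pods instead
# of cascading the deletion to them.
kind: DeleteOptions
apiVersion: v1
propagationPolicy: Orphan
```

With kubectl this roughly corresponds to `kubectl delete rc <name> --cascade=orphan` (older clients, contemporary with the v1.17 server in this log, used `--cascade=false`).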
• [SLOW TEST:40.194 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should orphan pods created by rc if delete options say so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance]","total":275,"completed":167,"skipped":2784,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl label should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 4 00:19:37.874: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219 [BeforeEach] Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1206 STEP: creating the pod Apr 4 00:19:37.945: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-8601' Apr 4 00:19:38.261: INFO: stderr: "" Apr 4 00:19:38.261: INFO: stdout: "pod/pause created\n" Apr 4 00:19:38.261: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [pause] Apr 4 00:19:38.261: INFO: Waiting up to 5m0s for pod "pause" in 
namespace "kubectl-8601" to be "running and ready" Apr 4 00:19:38.276: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 14.765269ms Apr 4 00:19:40.281: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020612227s Apr 4 00:19:42.286: INFO: Pod "pause": Phase="Running", Reason="", readiness=true. Elapsed: 4.025060145s Apr 4 00:19:42.286: INFO: Pod "pause" satisfied condition "running and ready" Apr 4 00:19:42.286: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [pause] [It] should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: adding the label testing-label with value testing-label-value to a pod Apr 4 00:19:42.286: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config label pods pause testing-label=testing-label-value --namespace=kubectl-8601' Apr 4 00:19:42.385: INFO: stderr: "" Apr 4 00:19:42.385: INFO: stdout: "pod/pause labeled\n" STEP: verifying the pod has the label testing-label with the value testing-label-value Apr 4 00:19:42.385: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-8601' Apr 4 00:19:42.476: INFO: stderr: "" Apr 4 00:19:42.476: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 4s testing-label-value\n" STEP: removing the label testing-label of a pod Apr 4 00:19:42.476: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config label pods pause testing-label- --namespace=kubectl-8601' Apr 4 00:19:42.572: INFO: stderr: "" Apr 4 00:19:42.572: INFO: stdout: "pod/pause labeled\n" STEP: verifying the pod doesn't have the label testing-label Apr 4 00:19:42.572: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 
--kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-8601' Apr 4 00:19:42.659: INFO: stderr: "" Apr 4 00:19:42.659: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 4s \n" [AfterEach] Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1213 STEP: using delete to clean up resources Apr 4 00:19:42.659: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-8601' Apr 4 00:19:42.773: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Apr 4 00:19:42.774: INFO: stdout: "pod \"pause\" force deleted\n" Apr 4 00:19:42.774: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get rc,svc -l name=pause --no-headers --namespace=kubectl-8601' Apr 4 00:19:43.126: INFO: stderr: "No resources found in kubectl-8601 namespace.\n" Apr 4 00:19:43.126: INFO: stdout: "" Apr 4 00:19:43.127: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods -l name=pause --namespace=kubectl-8601 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Apr 4 00:19:43.290: INFO: stderr: "" Apr 4 00:19:43.290: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 4 00:19:43.290: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-8601" for this suite. 
• [SLOW TEST:5.483 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1203 should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl label should update the label on a resource [Conformance]","total":275,"completed":168,"skipped":2960,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 4 00:19:43.358: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-8575 A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-8575;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-8575 
A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-8575;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-8575.svc A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-8575.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-8575.svc A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-8575.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-8575.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-8575.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-8575.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-8575.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-8575.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-8575.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-8575.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-8575.svc;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-8575.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 224.18.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.18.224_udp@PTR;check="$$(dig +tcp +noall +answer +search 224.18.96.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.96.18.224_tcp@PTR;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-8575 A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-8575;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-8575 A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-8575;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-8575.svc A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-8575.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-8575.svc A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-8575.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-8575.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-8575.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-8575.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-8575.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-8575.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-8575.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-8575.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-8575.svc;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-8575.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 224.18.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.18.224_udp@PTR;check="$$(dig +tcp +noall +answer +search 224.18.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.18.224_tcp@PTR;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Apr 4 00:19:51.806: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-8575/dns-test-b2c35552-45ee-4650-a833-b3fdba391834: the server could not find the requested resource (get pods dns-test-b2c35552-45ee-4650-a833-b3fdba391834) Apr 4 00:19:51.809: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-8575/dns-test-b2c35552-45ee-4650-a833-b3fdba391834: the server could not find the requested resource (get pods dns-test-b2c35552-45ee-4650-a833-b3fdba391834) Apr 4 00:19:51.812: INFO: Unable to read wheezy_udp@dns-test-service.dns-8575 from pod dns-8575/dns-test-b2c35552-45ee-4650-a833-b3fdba391834: the server could not find the requested resource (get pods dns-test-b2c35552-45ee-4650-a833-b3fdba391834) Apr 4 00:19:51.815: INFO: Unable to read wheezy_tcp@dns-test-service.dns-8575 from pod dns-8575/dns-test-b2c35552-45ee-4650-a833-b3fdba391834: the server could not find the requested resource (get pods dns-test-b2c35552-45ee-4650-a833-b3fdba391834) Apr 4 00:19:51.817: INFO: Unable to read wheezy_udp@dns-test-service.dns-8575.svc from pod dns-8575/dns-test-b2c35552-45ee-4650-a833-b3fdba391834: the server could not find the requested resource (get pods dns-test-b2c35552-45ee-4650-a833-b3fdba391834) 
Apr 4 00:19:51.819: INFO: Unable to read wheezy_tcp@dns-test-service.dns-8575.svc from pod dns-8575/dns-test-b2c35552-45ee-4650-a833-b3fdba391834: the server could not find the requested resource (get pods dns-test-b2c35552-45ee-4650-a833-b3fdba391834) Apr 4 00:19:51.822: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-8575.svc from pod dns-8575/dns-test-b2c35552-45ee-4650-a833-b3fdba391834: the server could not find the requested resource (get pods dns-test-b2c35552-45ee-4650-a833-b3fdba391834) Apr 4 00:19:51.824: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-8575.svc from pod dns-8575/dns-test-b2c35552-45ee-4650-a833-b3fdba391834: the server could not find the requested resource (get pods dns-test-b2c35552-45ee-4650-a833-b3fdba391834) Apr 4 00:19:51.843: INFO: Unable to read jessie_udp@dns-test-service from pod dns-8575/dns-test-b2c35552-45ee-4650-a833-b3fdba391834: the server could not find the requested resource (get pods dns-test-b2c35552-45ee-4650-a833-b3fdba391834) Apr 4 00:19:51.845: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-8575/dns-test-b2c35552-45ee-4650-a833-b3fdba391834: the server could not find the requested resource (get pods dns-test-b2c35552-45ee-4650-a833-b3fdba391834) Apr 4 00:19:51.848: INFO: Unable to read jessie_udp@dns-test-service.dns-8575 from pod dns-8575/dns-test-b2c35552-45ee-4650-a833-b3fdba391834: the server could not find the requested resource (get pods dns-test-b2c35552-45ee-4650-a833-b3fdba391834) Apr 4 00:19:51.850: INFO: Unable to read jessie_tcp@dns-test-service.dns-8575 from pod dns-8575/dns-test-b2c35552-45ee-4650-a833-b3fdba391834: the server could not find the requested resource (get pods dns-test-b2c35552-45ee-4650-a833-b3fdba391834) Apr 4 00:19:51.852: INFO: Unable to read jessie_udp@dns-test-service.dns-8575.svc from pod dns-8575/dns-test-b2c35552-45ee-4650-a833-b3fdba391834: the server could not find the requested resource (get pods 
dns-test-b2c35552-45ee-4650-a833-b3fdba391834) Apr 4 00:19:51.855: INFO: Unable to read jessie_tcp@dns-test-service.dns-8575.svc from pod dns-8575/dns-test-b2c35552-45ee-4650-a833-b3fdba391834: the server could not find the requested resource (get pods dns-test-b2c35552-45ee-4650-a833-b3fdba391834) Apr 4 00:19:51.857: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-8575.svc from pod dns-8575/dns-test-b2c35552-45ee-4650-a833-b3fdba391834: the server could not find the requested resource (get pods dns-test-b2c35552-45ee-4650-a833-b3fdba391834) Apr 4 00:19:51.859: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-8575.svc from pod dns-8575/dns-test-b2c35552-45ee-4650-a833-b3fdba391834: the server could not find the requested resource (get pods dns-test-b2c35552-45ee-4650-a833-b3fdba391834) Apr 4 00:19:51.875: INFO: Lookups using dns-8575/dns-test-b2c35552-45ee-4650-a833-b3fdba391834 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-8575 wheezy_tcp@dns-test-service.dns-8575 wheezy_udp@dns-test-service.dns-8575.svc wheezy_tcp@dns-test-service.dns-8575.svc wheezy_udp@_http._tcp.dns-test-service.dns-8575.svc wheezy_tcp@_http._tcp.dns-test-service.dns-8575.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-8575 jessie_tcp@dns-test-service.dns-8575 jessie_udp@dns-test-service.dns-8575.svc jessie_tcp@dns-test-service.dns-8575.svc jessie_udp@_http._tcp.dns-test-service.dns-8575.svc jessie_tcp@_http._tcp.dns-test-service.dns-8575.svc] Apr 4 00:19:56.880: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-8575/dns-test-b2c35552-45ee-4650-a833-b3fdba391834: the server could not find the requested resource (get pods dns-test-b2c35552-45ee-4650-a833-b3fdba391834) Apr 4 00:19:56.884: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-8575/dns-test-b2c35552-45ee-4650-a833-b3fdba391834: the server could not find the requested resource (get 
pods dns-test-b2c35552-45ee-4650-a833-b3fdba391834) Apr 4 00:19:56.887: INFO: Unable to read wheezy_udp@dns-test-service.dns-8575 from pod dns-8575/dns-test-b2c35552-45ee-4650-a833-b3fdba391834: the server could not find the requested resource (get pods dns-test-b2c35552-45ee-4650-a833-b3fdba391834) Apr 4 00:19:56.891: INFO: Unable to read wheezy_tcp@dns-test-service.dns-8575 from pod dns-8575/dns-test-b2c35552-45ee-4650-a833-b3fdba391834: the server could not find the requested resource (get pods dns-test-b2c35552-45ee-4650-a833-b3fdba391834) Apr 4 00:19:56.894: INFO: Unable to read wheezy_udp@dns-test-service.dns-8575.svc from pod dns-8575/dns-test-b2c35552-45ee-4650-a833-b3fdba391834: the server could not find the requested resource (get pods dns-test-b2c35552-45ee-4650-a833-b3fdba391834) Apr 4 00:19:56.898: INFO: Unable to read wheezy_tcp@dns-test-service.dns-8575.svc from pod dns-8575/dns-test-b2c35552-45ee-4650-a833-b3fdba391834: the server could not find the requested resource (get pods dns-test-b2c35552-45ee-4650-a833-b3fdba391834) Apr 4 00:19:56.901: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-8575.svc from pod dns-8575/dns-test-b2c35552-45ee-4650-a833-b3fdba391834: the server could not find the requested resource (get pods dns-test-b2c35552-45ee-4650-a833-b3fdba391834) Apr 4 00:19:56.904: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-8575.svc from pod dns-8575/dns-test-b2c35552-45ee-4650-a833-b3fdba391834: the server could not find the requested resource (get pods dns-test-b2c35552-45ee-4650-a833-b3fdba391834) Apr 4 00:19:56.927: INFO: Unable to read jessie_udp@dns-test-service from pod dns-8575/dns-test-b2c35552-45ee-4650-a833-b3fdba391834: the server could not find the requested resource (get pods dns-test-b2c35552-45ee-4650-a833-b3fdba391834) Apr 4 00:19:56.930: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-8575/dns-test-b2c35552-45ee-4650-a833-b3fdba391834: the server could not find the requested 
resource (get pods dns-test-b2c35552-45ee-4650-a833-b3fdba391834) Apr 4 00:19:56.934: INFO: Unable to read jessie_udp@dns-test-service.dns-8575 from pod dns-8575/dns-test-b2c35552-45ee-4650-a833-b3fdba391834: the server could not find the requested resource (get pods dns-test-b2c35552-45ee-4650-a833-b3fdba391834) Apr 4 00:19:56.938: INFO: Unable to read jessie_tcp@dns-test-service.dns-8575 from pod dns-8575/dns-test-b2c35552-45ee-4650-a833-b3fdba391834: the server could not find the requested resource (get pods dns-test-b2c35552-45ee-4650-a833-b3fdba391834) Apr 4 00:19:56.941: INFO: Unable to read jessie_udp@dns-test-service.dns-8575.svc from pod dns-8575/dns-test-b2c35552-45ee-4650-a833-b3fdba391834: the server could not find the requested resource (get pods dns-test-b2c35552-45ee-4650-a833-b3fdba391834) Apr 4 00:19:56.944: INFO: Unable to read jessie_tcp@dns-test-service.dns-8575.svc from pod dns-8575/dns-test-b2c35552-45ee-4650-a833-b3fdba391834: the server could not find the requested resource (get pods dns-test-b2c35552-45ee-4650-a833-b3fdba391834) Apr 4 00:19:56.948: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-8575.svc from pod dns-8575/dns-test-b2c35552-45ee-4650-a833-b3fdba391834: the server could not find the requested resource (get pods dns-test-b2c35552-45ee-4650-a833-b3fdba391834) Apr 4 00:19:56.951: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-8575.svc from pod dns-8575/dns-test-b2c35552-45ee-4650-a833-b3fdba391834: the server could not find the requested resource (get pods dns-test-b2c35552-45ee-4650-a833-b3fdba391834) Apr 4 00:19:56.969: INFO: Lookups using dns-8575/dns-test-b2c35552-45ee-4650-a833-b3fdba391834 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-8575 wheezy_tcp@dns-test-service.dns-8575 wheezy_udp@dns-test-service.dns-8575.svc wheezy_tcp@dns-test-service.dns-8575.svc wheezy_udp@_http._tcp.dns-test-service.dns-8575.svc 
wheezy_tcp@_http._tcp.dns-test-service.dns-8575.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-8575 jessie_tcp@dns-test-service.dns-8575 jessie_udp@dns-test-service.dns-8575.svc jessie_tcp@dns-test-service.dns-8575.svc jessie_udp@_http._tcp.dns-test-service.dns-8575.svc jessie_tcp@_http._tcp.dns-test-service.dns-8575.svc] Apr 4 00:20:01.880: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-8575/dns-test-b2c35552-45ee-4650-a833-b3fdba391834: the server could not find the requested resource (get pods dns-test-b2c35552-45ee-4650-a833-b3fdba391834) Apr 4 00:20:01.882: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-8575/dns-test-b2c35552-45ee-4650-a833-b3fdba391834: the server could not find the requested resource (get pods dns-test-b2c35552-45ee-4650-a833-b3fdba391834) Apr 4 00:20:01.885: INFO: Unable to read wheezy_udp@dns-test-service.dns-8575 from pod dns-8575/dns-test-b2c35552-45ee-4650-a833-b3fdba391834: the server could not find the requested resource (get pods dns-test-b2c35552-45ee-4650-a833-b3fdba391834) Apr 4 00:20:01.888: INFO: Unable to read wheezy_tcp@dns-test-service.dns-8575 from pod dns-8575/dns-test-b2c35552-45ee-4650-a833-b3fdba391834: the server could not find the requested resource (get pods dns-test-b2c35552-45ee-4650-a833-b3fdba391834) Apr 4 00:20:01.891: INFO: Unable to read wheezy_udp@dns-test-service.dns-8575.svc from pod dns-8575/dns-test-b2c35552-45ee-4650-a833-b3fdba391834: the server could not find the requested resource (get pods dns-test-b2c35552-45ee-4650-a833-b3fdba391834) Apr 4 00:20:01.893: INFO: Unable to read wheezy_tcp@dns-test-service.dns-8575.svc from pod dns-8575/dns-test-b2c35552-45ee-4650-a833-b3fdba391834: the server could not find the requested resource (get pods dns-test-b2c35552-45ee-4650-a833-b3fdba391834) Apr 4 00:20:01.896: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-8575.svc from pod 
dns-8575/dns-test-b2c35552-45ee-4650-a833-b3fdba391834: the server could not find the requested resource (get pods dns-test-b2c35552-45ee-4650-a833-b3fdba391834) Apr 4 00:20:01.899: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-8575.svc from pod dns-8575/dns-test-b2c35552-45ee-4650-a833-b3fdba391834: the server could not find the requested resource (get pods dns-test-b2c35552-45ee-4650-a833-b3fdba391834) Apr 4 00:20:01.919: INFO: Unable to read jessie_udp@dns-test-service from pod dns-8575/dns-test-b2c35552-45ee-4650-a833-b3fdba391834: the server could not find the requested resource (get pods dns-test-b2c35552-45ee-4650-a833-b3fdba391834) Apr 4 00:20:01.922: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-8575/dns-test-b2c35552-45ee-4650-a833-b3fdba391834: the server could not find the requested resource (get pods dns-test-b2c35552-45ee-4650-a833-b3fdba391834) Apr 4 00:20:01.925: INFO: Unable to read jessie_udp@dns-test-service.dns-8575 from pod dns-8575/dns-test-b2c35552-45ee-4650-a833-b3fdba391834: the server could not find the requested resource (get pods dns-test-b2c35552-45ee-4650-a833-b3fdba391834) Apr 4 00:20:01.927: INFO: Unable to read jessie_tcp@dns-test-service.dns-8575 from pod dns-8575/dns-test-b2c35552-45ee-4650-a833-b3fdba391834: the server could not find the requested resource (get pods dns-test-b2c35552-45ee-4650-a833-b3fdba391834) Apr 4 00:20:01.930: INFO: Unable to read jessie_udp@dns-test-service.dns-8575.svc from pod dns-8575/dns-test-b2c35552-45ee-4650-a833-b3fdba391834: the server could not find the requested resource (get pods dns-test-b2c35552-45ee-4650-a833-b3fdba391834) Apr 4 00:20:01.933: INFO: Unable to read jessie_tcp@dns-test-service.dns-8575.svc from pod dns-8575/dns-test-b2c35552-45ee-4650-a833-b3fdba391834: the server could not find the requested resource (get pods dns-test-b2c35552-45ee-4650-a833-b3fdba391834) Apr 4 00:20:01.936: INFO: Unable to read 
jessie_udp@_http._tcp.dns-test-service.dns-8575.svc from pod dns-8575/dns-test-b2c35552-45ee-4650-a833-b3fdba391834: the server could not find the requested resource (get pods dns-test-b2c35552-45ee-4650-a833-b3fdba391834) Apr 4 00:20:01.939: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-8575.svc from pod dns-8575/dns-test-b2c35552-45ee-4650-a833-b3fdba391834: the server could not find the requested resource (get pods dns-test-b2c35552-45ee-4650-a833-b3fdba391834) Apr 4 00:20:01.967: INFO: Lookups using dns-8575/dns-test-b2c35552-45ee-4650-a833-b3fdba391834 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-8575 wheezy_tcp@dns-test-service.dns-8575 wheezy_udp@dns-test-service.dns-8575.svc wheezy_tcp@dns-test-service.dns-8575.svc wheezy_udp@_http._tcp.dns-test-service.dns-8575.svc wheezy_tcp@_http._tcp.dns-test-service.dns-8575.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-8575 jessie_tcp@dns-test-service.dns-8575 jessie_udp@dns-test-service.dns-8575.svc jessie_tcp@dns-test-service.dns-8575.svc jessie_udp@_http._tcp.dns-test-service.dns-8575.svc jessie_tcp@_http._tcp.dns-test-service.dns-8575.svc] Apr 4 00:20:06.880: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-8575/dns-test-b2c35552-45ee-4650-a833-b3fdba391834: the server could not find the requested resource (get pods dns-test-b2c35552-45ee-4650-a833-b3fdba391834) Apr 4 00:20:06.884: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-8575/dns-test-b2c35552-45ee-4650-a833-b3fdba391834: the server could not find the requested resource (get pods dns-test-b2c35552-45ee-4650-a833-b3fdba391834) Apr 4 00:20:06.887: INFO: Unable to read wheezy_udp@dns-test-service.dns-8575 from pod dns-8575/dns-test-b2c35552-45ee-4650-a833-b3fdba391834: the server could not find the requested resource (get pods dns-test-b2c35552-45ee-4650-a833-b3fdba391834) Apr 4 00:20:06.890: INFO: Unable to read 
wheezy_tcp@dns-test-service.dns-8575 from pod dns-8575/dns-test-b2c35552-45ee-4650-a833-b3fdba391834: the server could not find the requested resource (get pods dns-test-b2c35552-45ee-4650-a833-b3fdba391834) Apr 4 00:20:06.894: INFO: Unable to read wheezy_udp@dns-test-service.dns-8575.svc from pod dns-8575/dns-test-b2c35552-45ee-4650-a833-b3fdba391834: the server could not find the requested resource (get pods dns-test-b2c35552-45ee-4650-a833-b3fdba391834) Apr 4 00:20:06.896: INFO: Unable to read wheezy_tcp@dns-test-service.dns-8575.svc from pod dns-8575/dns-test-b2c35552-45ee-4650-a833-b3fdba391834: the server could not find the requested resource (get pods dns-test-b2c35552-45ee-4650-a833-b3fdba391834) Apr 4 00:20:06.899: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-8575.svc from pod dns-8575/dns-test-b2c35552-45ee-4650-a833-b3fdba391834: the server could not find the requested resource (get pods dns-test-b2c35552-45ee-4650-a833-b3fdba391834) Apr 4 00:20:06.902: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-8575.svc from pod dns-8575/dns-test-b2c35552-45ee-4650-a833-b3fdba391834: the server could not find the requested resource (get pods dns-test-b2c35552-45ee-4650-a833-b3fdba391834) Apr 4 00:20:06.925: INFO: Unable to read jessie_udp@dns-test-service from pod dns-8575/dns-test-b2c35552-45ee-4650-a833-b3fdba391834: the server could not find the requested resource (get pods dns-test-b2c35552-45ee-4650-a833-b3fdba391834) Apr 4 00:20:06.928: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-8575/dns-test-b2c35552-45ee-4650-a833-b3fdba391834: the server could not find the requested resource (get pods dns-test-b2c35552-45ee-4650-a833-b3fdba391834) Apr 4 00:20:06.944: INFO: Unable to read jessie_udp@dns-test-service.dns-8575 from pod dns-8575/dns-test-b2c35552-45ee-4650-a833-b3fdba391834: the server could not find the requested resource (get pods dns-test-b2c35552-45ee-4650-a833-b3fdba391834) Apr 4 00:20:06.947: INFO: 
Unable to read jessie_tcp@dns-test-service.dns-8575 from pod dns-8575/dns-test-b2c35552-45ee-4650-a833-b3fdba391834: the server could not find the requested resource (get pods dns-test-b2c35552-45ee-4650-a833-b3fdba391834) Apr 4 00:20:06.950: INFO: Unable to read jessie_udp@dns-test-service.dns-8575.svc from pod dns-8575/dns-test-b2c35552-45ee-4650-a833-b3fdba391834: the server could not find the requested resource (get pods dns-test-b2c35552-45ee-4650-a833-b3fdba391834) Apr 4 00:20:06.953: INFO: Unable to read jessie_tcp@dns-test-service.dns-8575.svc from pod dns-8575/dns-test-b2c35552-45ee-4650-a833-b3fdba391834: the server could not find the requested resource (get pods dns-test-b2c35552-45ee-4650-a833-b3fdba391834) Apr 4 00:20:06.957: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-8575.svc from pod dns-8575/dns-test-b2c35552-45ee-4650-a833-b3fdba391834: the server could not find the requested resource (get pods dns-test-b2c35552-45ee-4650-a833-b3fdba391834) Apr 4 00:20:06.960: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-8575.svc from pod dns-8575/dns-test-b2c35552-45ee-4650-a833-b3fdba391834: the server could not find the requested resource (get pods dns-test-b2c35552-45ee-4650-a833-b3fdba391834) Apr 4 00:20:06.978: INFO: Lookups using dns-8575/dns-test-b2c35552-45ee-4650-a833-b3fdba391834 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-8575 wheezy_tcp@dns-test-service.dns-8575 wheezy_udp@dns-test-service.dns-8575.svc wheezy_tcp@dns-test-service.dns-8575.svc wheezy_udp@_http._tcp.dns-test-service.dns-8575.svc wheezy_tcp@_http._tcp.dns-test-service.dns-8575.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-8575 jessie_tcp@dns-test-service.dns-8575 jessie_udp@dns-test-service.dns-8575.svc jessie_tcp@dns-test-service.dns-8575.svc jessie_udp@_http._tcp.dns-test-service.dns-8575.svc jessie_tcp@_http._tcp.dns-test-service.dns-8575.svc] 
Apr 4 00:20:11.880: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-8575/dns-test-b2c35552-45ee-4650-a833-b3fdba391834: the server could not find the requested resource (get pods dns-test-b2c35552-45ee-4650-a833-b3fdba391834) Apr 4 00:20:11.884: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-8575/dns-test-b2c35552-45ee-4650-a833-b3fdba391834: the server could not find the requested resource (get pods dns-test-b2c35552-45ee-4650-a833-b3fdba391834) Apr 4 00:20:11.888: INFO: Unable to read wheezy_udp@dns-test-service.dns-8575 from pod dns-8575/dns-test-b2c35552-45ee-4650-a833-b3fdba391834: the server could not find the requested resource (get pods dns-test-b2c35552-45ee-4650-a833-b3fdba391834) Apr 4 00:20:11.890: INFO: Unable to read wheezy_tcp@dns-test-service.dns-8575 from pod dns-8575/dns-test-b2c35552-45ee-4650-a833-b3fdba391834: the server could not find the requested resource (get pods dns-test-b2c35552-45ee-4650-a833-b3fdba391834) Apr 4 00:20:11.892: INFO: Unable to read wheezy_udp@dns-test-service.dns-8575.svc from pod dns-8575/dns-test-b2c35552-45ee-4650-a833-b3fdba391834: the server could not find the requested resource (get pods dns-test-b2c35552-45ee-4650-a833-b3fdba391834) Apr 4 00:20:11.894: INFO: Unable to read wheezy_tcp@dns-test-service.dns-8575.svc from pod dns-8575/dns-test-b2c35552-45ee-4650-a833-b3fdba391834: the server could not find the requested resource (get pods dns-test-b2c35552-45ee-4650-a833-b3fdba391834) Apr 4 00:20:11.897: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-8575.svc from pod dns-8575/dns-test-b2c35552-45ee-4650-a833-b3fdba391834: the server could not find the requested resource (get pods dns-test-b2c35552-45ee-4650-a833-b3fdba391834) Apr 4 00:20:11.899: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-8575.svc from pod dns-8575/dns-test-b2c35552-45ee-4650-a833-b3fdba391834: the server could not find the requested resource (get pods 
dns-test-b2c35552-45ee-4650-a833-b3fdba391834) Apr 4 00:20:11.920: INFO: Unable to read jessie_udp@dns-test-service from pod dns-8575/dns-test-b2c35552-45ee-4650-a833-b3fdba391834: the server could not find the requested resource (get pods dns-test-b2c35552-45ee-4650-a833-b3fdba391834) Apr 4 00:20:11.923: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-8575/dns-test-b2c35552-45ee-4650-a833-b3fdba391834: the server could not find the requested resource (get pods dns-test-b2c35552-45ee-4650-a833-b3fdba391834) Apr 4 00:20:11.925: INFO: Unable to read jessie_udp@dns-test-service.dns-8575 from pod dns-8575/dns-test-b2c35552-45ee-4650-a833-b3fdba391834: the server could not find the requested resource (get pods dns-test-b2c35552-45ee-4650-a833-b3fdba391834) Apr 4 00:20:11.928: INFO: Unable to read jessie_tcp@dns-test-service.dns-8575 from pod dns-8575/dns-test-b2c35552-45ee-4650-a833-b3fdba391834: the server could not find the requested resource (get pods dns-test-b2c35552-45ee-4650-a833-b3fdba391834) Apr 4 00:20:11.931: INFO: Unable to read jessie_udp@dns-test-service.dns-8575.svc from pod dns-8575/dns-test-b2c35552-45ee-4650-a833-b3fdba391834: the server could not find the requested resource (get pods dns-test-b2c35552-45ee-4650-a833-b3fdba391834) Apr 4 00:20:11.934: INFO: Unable to read jessie_tcp@dns-test-service.dns-8575.svc from pod dns-8575/dns-test-b2c35552-45ee-4650-a833-b3fdba391834: the server could not find the requested resource (get pods dns-test-b2c35552-45ee-4650-a833-b3fdba391834) Apr 4 00:20:11.937: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-8575.svc from pod dns-8575/dns-test-b2c35552-45ee-4650-a833-b3fdba391834: the server could not find the requested resource (get pods dns-test-b2c35552-45ee-4650-a833-b3fdba391834) Apr 4 00:20:11.940: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-8575.svc from pod dns-8575/dns-test-b2c35552-45ee-4650-a833-b3fdba391834: the server could not find the requested 
resource (get pods dns-test-b2c35552-45ee-4650-a833-b3fdba391834) Apr 4 00:20:11.957: INFO: Lookups using dns-8575/dns-test-b2c35552-45ee-4650-a833-b3fdba391834 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-8575 wheezy_tcp@dns-test-service.dns-8575 wheezy_udp@dns-test-service.dns-8575.svc wheezy_tcp@dns-test-service.dns-8575.svc wheezy_udp@_http._tcp.dns-test-service.dns-8575.svc wheezy_tcp@_http._tcp.dns-test-service.dns-8575.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-8575 jessie_tcp@dns-test-service.dns-8575 jessie_udp@dns-test-service.dns-8575.svc jessie_tcp@dns-test-service.dns-8575.svc jessie_udp@_http._tcp.dns-test-service.dns-8575.svc jessie_tcp@_http._tcp.dns-test-service.dns-8575.svc] Apr 4 00:20:16.880: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-8575/dns-test-b2c35552-45ee-4650-a833-b3fdba391834: the server could not find the requested resource (get pods dns-test-b2c35552-45ee-4650-a833-b3fdba391834) Apr 4 00:20:16.883: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-8575/dns-test-b2c35552-45ee-4650-a833-b3fdba391834: the server could not find the requested resource (get pods dns-test-b2c35552-45ee-4650-a833-b3fdba391834) Apr 4 00:20:16.887: INFO: Unable to read wheezy_udp@dns-test-service.dns-8575 from pod dns-8575/dns-test-b2c35552-45ee-4650-a833-b3fdba391834: the server could not find the requested resource (get pods dns-test-b2c35552-45ee-4650-a833-b3fdba391834) Apr 4 00:20:16.890: INFO: Unable to read wheezy_tcp@dns-test-service.dns-8575 from pod dns-8575/dns-test-b2c35552-45ee-4650-a833-b3fdba391834: the server could not find the requested resource (get pods dns-test-b2c35552-45ee-4650-a833-b3fdba391834) Apr 4 00:20:16.894: INFO: Unable to read wheezy_udp@dns-test-service.dns-8575.svc from pod dns-8575/dns-test-b2c35552-45ee-4650-a833-b3fdba391834: the server could not find the requested resource (get pods 
dns-test-b2c35552-45ee-4650-a833-b3fdba391834) Apr 4 00:20:16.897: INFO: Unable to read wheezy_tcp@dns-test-service.dns-8575.svc from pod dns-8575/dns-test-b2c35552-45ee-4650-a833-b3fdba391834: the server could not find the requested resource (get pods dns-test-b2c35552-45ee-4650-a833-b3fdba391834) Apr 4 00:20:16.900: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-8575.svc from pod dns-8575/dns-test-b2c35552-45ee-4650-a833-b3fdba391834: the server could not find the requested resource (get pods dns-test-b2c35552-45ee-4650-a833-b3fdba391834) Apr 4 00:20:16.903: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-8575.svc from pod dns-8575/dns-test-b2c35552-45ee-4650-a833-b3fdba391834: the server could not find the requested resource (get pods dns-test-b2c35552-45ee-4650-a833-b3fdba391834) Apr 4 00:20:16.935: INFO: Unable to read jessie_udp@dns-test-service from pod dns-8575/dns-test-b2c35552-45ee-4650-a833-b3fdba391834: the server could not find the requested resource (get pods dns-test-b2c35552-45ee-4650-a833-b3fdba391834) Apr 4 00:20:16.938: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-8575/dns-test-b2c35552-45ee-4650-a833-b3fdba391834: the server could not find the requested resource (get pods dns-test-b2c35552-45ee-4650-a833-b3fdba391834) Apr 4 00:20:16.941: INFO: Unable to read jessie_udp@dns-test-service.dns-8575 from pod dns-8575/dns-test-b2c35552-45ee-4650-a833-b3fdba391834: the server could not find the requested resource (get pods dns-test-b2c35552-45ee-4650-a833-b3fdba391834) Apr 4 00:20:16.962: INFO: Unable to read jessie_tcp@dns-test-service.dns-8575 from pod dns-8575/dns-test-b2c35552-45ee-4650-a833-b3fdba391834: the server could not find the requested resource (get pods dns-test-b2c35552-45ee-4650-a833-b3fdba391834) Apr 4 00:20:16.965: INFO: Unable to read jessie_udp@dns-test-service.dns-8575.svc from pod dns-8575/dns-test-b2c35552-45ee-4650-a833-b3fdba391834: the server could not find the requested 
resource (get pods dns-test-b2c35552-45ee-4650-a833-b3fdba391834) Apr 4 00:20:16.967: INFO: Unable to read jessie_tcp@dns-test-service.dns-8575.svc from pod dns-8575/dns-test-b2c35552-45ee-4650-a833-b3fdba391834: the server could not find the requested resource (get pods dns-test-b2c35552-45ee-4650-a833-b3fdba391834) Apr 4 00:20:16.971: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-8575.svc from pod dns-8575/dns-test-b2c35552-45ee-4650-a833-b3fdba391834: the server could not find the requested resource (get pods dns-test-b2c35552-45ee-4650-a833-b3fdba391834) Apr 4 00:20:16.973: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-8575.svc from pod dns-8575/dns-test-b2c35552-45ee-4650-a833-b3fdba391834: the server could not find the requested resource (get pods dns-test-b2c35552-45ee-4650-a833-b3fdba391834) Apr 4 00:20:16.989: INFO: Lookups using dns-8575/dns-test-b2c35552-45ee-4650-a833-b3fdba391834 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-8575 wheezy_tcp@dns-test-service.dns-8575 wheezy_udp@dns-test-service.dns-8575.svc wheezy_tcp@dns-test-service.dns-8575.svc wheezy_udp@_http._tcp.dns-test-service.dns-8575.svc wheezy_tcp@_http._tcp.dns-test-service.dns-8575.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-8575 jessie_tcp@dns-test-service.dns-8575 jessie_udp@dns-test-service.dns-8575.svc jessie_tcp@dns-test-service.dns-8575.svc jessie_udp@_http._tcp.dns-test-service.dns-8575.svc jessie_tcp@_http._tcp.dns-test-service.dns-8575.svc] Apr 4 00:20:21.966: INFO: DNS probes using dns-8575/dns-test-b2c35552-45ee-4650-a833-b3fdba391834 succeeded STEP: deleting the pod STEP: deleting the test service STEP: deleting the test headless service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 4 00:20:22.569: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready 
STEP: Destroying namespace "dns-8575" for this suite. • [SLOW TEST:39.223 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-network] DNS should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]","total":275,"completed":169,"skipped":2991,"failed":0} SSSSSSSS ------------------------------ [sig-apps] Deployment deployment should support proportional scaling [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 4 00:20:22.581: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:74 [It] deployment should support proportional scaling [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Apr 4 00:20:22.675: INFO: Creating deployment "webserver-deployment" Apr 4 00:20:22.693: INFO: Waiting for observed generation 1 Apr 4 00:20:24.774: INFO: Waiting for all required pods to come up Apr 4 00:20:24.778: INFO: Pod name httpd: Found 10 pods out of 10 STEP: ensuring each pod is running Apr 4 00:20:33.075: INFO: Waiting for deployment "webserver-deployment" to complete Apr 4 00:20:33.081: INFO: Updating deployment "webserver-deployment" with a non-existent image Apr 4 00:20:33.087: INFO: Updating deployment 
webserver-deployment Apr 4 00:20:33.087: INFO: Waiting for observed generation 2 Apr 4 00:20:35.099: INFO: Waiting for the first rollout's replicaset to have .status.availableReplicas = 8 Apr 4 00:20:35.102: INFO: Waiting for the first rollout's replicaset to have .spec.replicas = 8 Apr 4 00:20:35.105: INFO: Waiting for the first rollout's replicaset of deployment "webserver-deployment" to have desired number of replicas Apr 4 00:20:35.111: INFO: Verifying that the second rollout's replicaset has .status.availableReplicas = 0 Apr 4 00:20:35.111: INFO: Waiting for the second rollout's replicaset to have .spec.replicas = 5 Apr 4 00:20:35.113: INFO: Waiting for the second rollout's replicaset of deployment "webserver-deployment" to have desired number of replicas Apr 4 00:20:35.117: INFO: Verifying that deployment "webserver-deployment" has minimum required number of available replicas Apr 4 00:20:35.117: INFO: Scaling up the deployment "webserver-deployment" from 10 to 30 Apr 4 00:20:35.123: INFO: Updating deployment webserver-deployment Apr 4 00:20:35.123: INFO: Waiting for the replicasets of deployment "webserver-deployment" to have desired number of replicas Apr 4 00:20:35.178: INFO: Verifying that first rollout's replicaset has .spec.replicas = 20 Apr 4 00:20:35.219: INFO: Verifying that second rollout's replicaset has .spec.replicas = 13 [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:68 Apr 4 00:20:35.464: INFO: Deployment "webserver-deployment": &Deployment{ObjectMeta:{webserver-deployment deployment-5065 /apis/apps/v1/namespaces/deployment-5065/deployments/webserver-deployment dff887bc-e63d-44f5-9847-1f2777fac97f 5204996 3 2020-04-04 00:20:22 +0000 UTC map[name:httpd] map[deployment.kubernetes.io/revision:2] [] [] []},Spec:DeploymentSpec{Replicas:*30,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 
0001-01-01 00:00:00 +0000 UTC map[name:httpd] map[] [] [] []} {[] [] [{httpd webserver:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc003905498 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:2,MaxSurge:3,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:3,Replicas:13,UpdatedReplicas:5,AvailableReplicas:8,UnavailableReplicas:25,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Progressing,Status:True,Reason:ReplicaSetUpdated,Message:ReplicaSet "webserver-deployment-c7997dcc8" is progressing.,LastUpdateTime:2020-04-04 00:20:33 +0000 UTC,LastTransitionTime:2020-04-04 00:20:22 +0000 UTC,},DeploymentCondition{Type:Available,Status:False,Reason:MinimumReplicasUnavailable,Message:Deployment does not have minimum availability.,LastUpdateTime:2020-04-04 00:20:35 +0000 UTC,LastTransitionTime:2020-04-04 00:20:35 +0000 UTC,},},ReadyReplicas:8,CollisionCount:nil,},} Apr 4 00:20:35.531: INFO: New ReplicaSet "webserver-deployment-c7997dcc8" of Deployment "webserver-deployment": &ReplicaSet{ObjectMeta:{webserver-deployment-c7997dcc8 deployment-5065 /apis/apps/v1/namespaces/deployment-5065/replicasets/webserver-deployment-c7997dcc8 dbcb2bec-7b16-46da-af44-2406d93008de 5205049 3 2020-04-04 00:20:33 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] 
map[deployment.kubernetes.io/desired-replicas:30 deployment.kubernetes.io/max-replicas:33 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment webserver-deployment dff887bc-e63d-44f5-9847-1f2777fac97f 0xc00284d747 0xc00284d748}] [] []},Spec:ReplicaSetSpec{Replicas:*13,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: c7997dcc8,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [] [] []} {[] [] [{httpd webserver:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc00284d7b8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:13,FullyLabeledReplicas:13,ObservedGeneration:3,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Apr 4 00:20:35.531: INFO: All old ReplicaSets of Deployment "webserver-deployment": Apr 4 00:20:35.531: INFO: &ReplicaSet{ObjectMeta:{webserver-deployment-595b5b9587 deployment-5065 /apis/apps/v1/namespaces/deployment-5065/replicasets/webserver-deployment-595b5b9587 6316c809-559d-4b6e-a444-6818dc66d652 5205024 3 2020-04-04 00:20:22 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[deployment.kubernetes.io/desired-replicas:30 deployment.kubernetes.io/max-replicas:33 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment webserver-deployment dff887bc-e63d-44f5-9847-1f2777fac97f 0xc00284d687 0xc00284d688}] [] 
[]},Spec:ReplicaSetSpec{Replicas:*20,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: 595b5b9587,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc00284d6e8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:8,FullyLabeledReplicas:8,ObservedGeneration:3,ReadyReplicas:8,AvailableReplicas:8,Conditions:[]ReplicaSetCondition{},},} Apr 4 00:20:35.553: INFO: Pod "webserver-deployment-595b5b9587-46zhc" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-46zhc webserver-deployment-595b5b9587- deployment-5065 /api/v1/namespaces/deployment-5065/pods/webserver-deployment-595b5b9587-46zhc 686ac0e8-6c89-4db7-a860-b054bd37aee6 5205035 0 2020-04-04 00:20:35 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 6316c809-559d-4b6e-a444-6818dc66d652 0xc00238c227 0xc00238c228}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-rzt9c,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-rzt9c,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-rzt9c,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectR
eference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-04 00:20:35 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 4 00:20:35.553: INFO: Pod "webserver-deployment-595b5b9587-55mr9" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-55mr9 webserver-deployment-595b5b9587- deployment-5065 /api/v1/namespaces/deployment-5065/pods/webserver-deployment-595b5b9587-55mr9 97f6e512-d88f-4cb5-b5bf-293a6d8d78c5 5204911 0 2020-04-04 00:20:22 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 6316c809-559d-4b6e-a444-6818dc66d652 0xc00238c347 0xc00238c348}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-rzt9c,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-rzt9c,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-rzt9c,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectRe
ference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-04 00:20:23 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-04 00:20:32 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-04 00:20:32 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-04 00:20:22 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.13,PodIP:10.244.2.7,StartTime:2020-04-04 00:20:23 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-04-04 00:20:31 +0000 
UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://3011480c72923c4f7285599843a938255c8243ce8b465dd7b5cc6f4d40bd831b,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.7,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 4 00:20:35.553: INFO: Pod "webserver-deployment-595b5b9587-6b7j8" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-6b7j8 webserver-deployment-595b5b9587- deployment-5065 /api/v1/namespaces/deployment-5065/pods/webserver-deployment-595b5b9587-6b7j8 c84afca3-87a3-4ef6-ba89-dcf14d1ae2ca 5204884 0 2020-04-04 00:20:22 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 6316c809-559d-4b6e-a444-6818dc66d652 0xc00238c4e7 0xc00238c4e8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-rzt9c,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-rzt9c,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-rzt9c,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectR
eference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-04 00:20:22 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-04 00:20:31 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-04 00:20:31 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-04 00:20:22 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:10.244.1.54,StartTime:2020-04-04 00:20:22 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-04-04 00:20:29 +0000 
UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://6b948470f418df3209682f19027317a7d0d2454a6f7b0f20d753954e6146813c,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.54,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 4 00:20:35.554: INFO: Pod "webserver-deployment-595b5b9587-7mrn9" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-7mrn9 webserver-deployment-595b5b9587- deployment-5065 /api/v1/namespaces/deployment-5065/pods/webserver-deployment-595b5b9587-7mrn9 3f55da72-3615-414d-b5b3-c60cfe22cc63 5205002 0 2020-04-04 00:20:35 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 6316c809-559d-4b6e-a444-6818dc66d652 0xc00238c677 0xc00238c678}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-rzt9c,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-rzt9c,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-rzt9c,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectR
eference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-04 00:20:35 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 4 00:20:35.554: INFO: Pod "webserver-deployment-595b5b9587-7z2j8" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-7z2j8 webserver-deployment-595b5b9587- deployment-5065 /api/v1/namespaces/deployment-5065/pods/webserver-deployment-595b5b9587-7z2j8 7e24a767-55a8-40bf-9800-37c0a82463a6 5204847 0 2020-04-04 00:20:22 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 6316c809-559d-4b6e-a444-6818dc66d652 0xc00238c7d7 0xc00238c7d8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-rzt9c,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-rzt9c,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-rzt9c,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectR
eference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-04 00:20:22 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-04 00:20:28 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-04 00:20:28 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-04 00:20:22 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:10.244.1.53,StartTime:2020-04-04 00:20:22 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-04-04 00:20:28 +0000 
UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://b0a460556abb468e31c44b1ae1be47b88f45ebabcbd65c4524f37ecdba89fc2e,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.53,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 4 00:20:35.554: INFO: Pod "webserver-deployment-595b5b9587-87gbh" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-87gbh webserver-deployment-595b5b9587- deployment-5065 /api/v1/namespaces/deployment-5065/pods/webserver-deployment-595b5b9587-87gbh 5292760f-8106-4ac2-a197-f4c9befe2097 5205039 0 2020-04-04 00:20:35 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 6316c809-559d-4b6e-a444-6818dc66d652 0xc00238c967 0xc00238c968}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-rzt9c,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-rzt9c,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-rzt9c,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectR
eference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-04 00:20:35 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-04 00:20:35 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-04 00:20:35 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-04 00:20:35 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:,StartTime:2020-04-04 00:20:35 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 4 00:20:35.554: INFO: Pod "webserver-deployment-595b5b9587-96jng" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-96jng webserver-deployment-595b5b9587- deployment-5065 /api/v1/namespaces/deployment-5065/pods/webserver-deployment-595b5b9587-96jng 9c72d7e4-bde0-49f7-9d1b-906f28126dac 5204879 0 2020-04-04 00:20:22 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 6316c809-559d-4b6e-a444-6818dc66d652 0xc00238cac7 0xc00238cac8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-rzt9c,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-rzt9c,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-rzt9c,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectRe
ference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-04 00:20:22 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-04 00:20:30 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-04 00:20:30 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-04 00:20:22 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.13,PodIP:10.244.2.3,StartTime:2020-04-04 00:20:22 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-04-04 00:20:29 +0000 
UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://bb847a61f4f3b77ed54013a7dcbf0da97b0b1e7ab2682bd884d38d7b97c20ab3,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.3,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 4 00:20:35.554: INFO: Pod "webserver-deployment-595b5b9587-9ghks" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-9ghks webserver-deployment-595b5b9587- deployment-5065 /api/v1/namespaces/deployment-5065/pods/webserver-deployment-595b5b9587-9ghks 0f551632-afdf-45eb-b5b5-af2412640626 5205014 0 2020-04-04 00:20:35 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 6316c809-559d-4b6e-a444-6818dc66d652 0xc00238cc47 0xc00238cc48}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-rzt9c,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-rzt9c,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-rzt9c,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectRe
ference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-04 00:20:35 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 4 00:20:35.555: INFO: Pod "webserver-deployment-595b5b9587-cm8bf" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-cm8bf webserver-deployment-595b5b9587- deployment-5065 /api/v1/namespaces/deployment-5065/pods/webserver-deployment-595b5b9587-cm8bf b386c399-daef-43da-a343-b81705ed4c5a 5205032 0 2020-04-04 00:20:35 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 6316c809-559d-4b6e-a444-6818dc66d652 0xc00238cd67 0xc00238cd68}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-rzt9c,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-rzt9c,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-rzt9c,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectRe
ference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-04 00:20:35 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 4 00:20:35.555: INFO: Pod "webserver-deployment-595b5b9587-f4m6q" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-f4m6q webserver-deployment-595b5b9587- deployment-5065 /api/v1/namespaces/deployment-5065/pods/webserver-deployment-595b5b9587-f4m6q bc7436a8-7a8a-47ff-8ae9-af00cf3e858b 5205025 0 2020-04-04 00:20:35 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 6316c809-559d-4b6e-a444-6818dc66d652 0xc00238ce87 0xc00238ce88}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-rzt9c,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-rzt9c,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-rzt9c,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectRe
ference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-04 00:20:35 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 4 00:20:35.555: INFO: Pod "webserver-deployment-595b5b9587-g8psh" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-g8psh webserver-deployment-595b5b9587- deployment-5065 /api/v1/namespaces/deployment-5065/pods/webserver-deployment-595b5b9587-g8psh 9b6f7f9b-1c65-411b-b316-0df516365b94 5205018 0 2020-04-04 00:20:35 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 6316c809-559d-4b6e-a444-6818dc66d652 0xc00238cfa7 0xc00238cfa8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-rzt9c,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-rzt9c,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-rzt9c,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectRe
ference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-04 00:20:35 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 4 00:20:35.555: INFO: Pod "webserver-deployment-595b5b9587-gdbl8" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-gdbl8 webserver-deployment-595b5b9587- deployment-5065 /api/v1/namespaces/deployment-5065/pods/webserver-deployment-595b5b9587-gdbl8 c5e4924a-c868-4107-a3e3-dee1524d57c7 5205026 0 2020-04-04 00:20:35 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 6316c809-559d-4b6e-a444-6818dc66d652 0xc00238d0c7 0xc00238d0c8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-rzt9c,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-rzt9c,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-rzt9c,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectR
eference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-04 00:20:35 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 4 00:20:35.555: INFO: Pod "webserver-deployment-595b5b9587-jbmxw" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-jbmxw webserver-deployment-595b5b9587- deployment-5065 /api/v1/namespaces/deployment-5065/pods/webserver-deployment-595b5b9587-jbmxw 22cf3df8-62de-4f6a-9bb6-2faebb368f8d 5204914 0 2020-04-04 00:20:22 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 6316c809-559d-4b6e-a444-6818dc66d652 0xc00238d1e7 0xc00238d1e8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-rzt9c,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-rzt9c,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-rzt9c,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectRe
ference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-04 00:20:23 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-04 00:20:32 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-04 00:20:32 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-04 00:20:22 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.13,PodIP:10.244.2.6,StartTime:2020-04-04 00:20:23 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-04-04 00:20:31 +0000 
UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://494a5b066a0c5deb3829998783ea4af42a1906471cdd4abe194bcfd3efd77a6f,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.6,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 4 00:20:35.555: INFO: Pod "webserver-deployment-595b5b9587-mcv57" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-mcv57 webserver-deployment-595b5b9587- deployment-5065 /api/v1/namespaces/deployment-5065/pods/webserver-deployment-595b5b9587-mcv57 a8cc60ba-9d20-46b1-8f87-e3db321a69a1 5205011 0 2020-04-04 00:20:35 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 6316c809-559d-4b6e-a444-6818dc66d652 0xc00238d367 0xc00238d368}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-rzt9c,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-rzt9c,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-rzt9c,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectR
eference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-04 00:20:35 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 4 00:20:35.555: INFO: Pod "webserver-deployment-595b5b9587-ntlwt" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-ntlwt webserver-deployment-595b5b9587- deployment-5065 /api/v1/namespaces/deployment-5065/pods/webserver-deployment-595b5b9587-ntlwt a3f4e0b2-68ed-487d-aa29-4781697b8819 5204875 0 2020-04-04 00:20:22 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 6316c809-559d-4b6e-a444-6818dc66d652 0xc00238d487 0xc00238d488}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-rzt9c,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-rzt9c,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-rzt9c,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectRe
ference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-04 00:20:22 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-04 00:20:30 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-04 00:20:30 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-04 00:20:22 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.13,PodIP:10.244.2.4,StartTime:2020-04-04 00:20:22 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-04-04 00:20:29 +0000 
UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://c27bc80680478820eb071914d5154413f27afc4d060527944b8ace5f43159f31,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.4,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 4 00:20:35.556: INFO: Pod "webserver-deployment-595b5b9587-nwbt2" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-nwbt2 webserver-deployment-595b5b9587- deployment-5065 /api/v1/namespaces/deployment-5065/pods/webserver-deployment-595b5b9587-nwbt2 70b09b8d-e9c1-4cba-9420-ceb0707e36b9 5204870 0 2020-04-04 00:20:22 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 6316c809-559d-4b6e-a444-6818dc66d652 0xc00238d607 0xc00238d608}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-rzt9c,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-rzt9c,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-rzt9c,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectRe
ference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-04 00:20:22 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-04 00:20:30 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-04 00:20:30 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-04 00:20:22 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.13,PodIP:10.244.2.5,StartTime:2020-04-04 00:20:22 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-04-04 00:20:30 +0000 
UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://4877e949d8061ea32ea95048a8379ba1426f9e8788617ed7dcd5a8fd2515e468,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.5,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 4 00:20:35.556: INFO: Pod "webserver-deployment-595b5b9587-qbkwd" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-qbkwd webserver-deployment-595b5b9587- deployment-5065 /api/v1/namespaces/deployment-5065/pods/webserver-deployment-595b5b9587-qbkwd f02ed987-2d61-4fc0-a50d-0d5349c57143 5205022 0 2020-04-04 00:20:35 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 6316c809-559d-4b6e-a444-6818dc66d652 0xc00238d7a7 0xc00238d7a8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-rzt9c,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-rzt9c,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-rzt9c,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectR
eference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-04 00:20:35 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 4 00:20:35.556: INFO: Pod "webserver-deployment-595b5b9587-rhrqt" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-rhrqt webserver-deployment-595b5b9587- deployment-5065 /api/v1/namespaces/deployment-5065/pods/webserver-deployment-595b5b9587-rhrqt 10356a63-2d9b-44c8-adc2-4a483498143e 5205053 0 2020-04-04 00:20:35 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 6316c809-559d-4b6e-a444-6818dc66d652 0xc00238d8c7 0xc00238d8c8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-rzt9c,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-rzt9c,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-rzt9c,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectR
eference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-04 00:20:35 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-04 00:20:35 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-04 00:20:35 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-04 00:20:35 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:,StartTime:2020-04-04 00:20:35 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 4 00:20:35.556: INFO: Pod "webserver-deployment-595b5b9587-tlj8q" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-tlj8q webserver-deployment-595b5b9587- deployment-5065 /api/v1/namespaces/deployment-5065/pods/webserver-deployment-595b5b9587-tlj8q e13c249d-a3e5-407d-a3fc-1da5e087ad35 5204905 0 2020-04-04 00:20:22 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 6316c809-559d-4b6e-a444-6818dc66d652 0xc00238da37 0xc00238da38}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-rzt9c,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-rzt9c,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-rzt9c,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectR
eference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-04 00:20:22 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-04 00:20:32 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-04 00:20:32 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-04 00:20:22 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:10.244.1.56,StartTime:2020-04-04 00:20:22 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-04-04 00:20:31 +0000 
UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://64d2db33f0011b7894dea7eaedf2522b0a48c6c7448153e443f98e61fc393ed4,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.56,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 4 00:20:35.556: INFO: Pod "webserver-deployment-595b5b9587-tzwsw" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-tzwsw webserver-deployment-595b5b9587- deployment-5065 /api/v1/namespaces/deployment-5065/pods/webserver-deployment-595b5b9587-tzwsw 1e5ff197-a5e5-46fd-acc8-f0c362824be3 5204993 0 2020-04-04 00:20:35 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 6316c809-559d-4b6e-a444-6818dc66d652 0xc00238dbd7 0xc00238dbd8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-rzt9c,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-rzt9c,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-rzt9c,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectRe
ference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-04 00:20:35 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 4 00:20:35.556: INFO: Pod "webserver-deployment-c7997dcc8-4h7tm" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-4h7tm webserver-deployment-c7997dcc8- deployment-5065 /api/v1/namespaces/deployment-5065/pods/webserver-deployment-c7997dcc8-4h7tm 7ef73822-a4a0-4a11-b8f5-a04c6c0f5e4b 5204952 0 2020-04-04 00:20:33 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 dbcb2bec-7b16-46da-af44-2406d93008de 0xc00238dd07 0xc00238dd08}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-rzt9c,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-rzt9c,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-rzt9c,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Sub
domain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-04 00:20:33 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-04 00:20:33 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-04 00:20:33 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-04 00:20:33 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:,StartTime:2020-04-04 00:20:33 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 4 00:20:35.557: INFO: Pod "webserver-deployment-c7997dcc8-7njgx" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-7njgx webserver-deployment-c7997dcc8- deployment-5065 /api/v1/namespaces/deployment-5065/pods/webserver-deployment-c7997dcc8-7njgx 6378490c-7922-4498-9d8f-f7e2bd83129c 5205034 0 2020-04-04 00:20:35 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 dbcb2bec-7b16-46da-af44-2406d93008de 0xc00238def7 0xc00238def8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-rzt9c,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-rzt9c,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-rzt9c,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subd
omain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-04 00:20:35 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 4 00:20:35.557: INFO: Pod "webserver-deployment-c7997dcc8-9jmzv" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-9jmzv webserver-deployment-c7997dcc8- deployment-5065 /api/v1/namespaces/deployment-5065/pods/webserver-deployment-c7997dcc8-9jmzv 37fd95e9-7b47-44b5-81c5-1fb596628d0e 5205020 0 2020-04-04 00:20:35 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 dbcb2bec-7b16-46da-af44-2406d93008de 0xc003a86027 0xc003a86028}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-rzt9c,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-rzt9c,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-rzt9c,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Sub
domain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-04 00:20:35 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 4 00:20:35.557: INFO: Pod "webserver-deployment-c7997dcc8-dg9cl" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-dg9cl webserver-deployment-c7997dcc8- deployment-5065 /api/v1/namespaces/deployment-5065/pods/webserver-deployment-c7997dcc8-dg9cl 8b580a8c-2cdc-431d-ade9-9d490f50447e 5205051 0 2020-04-04 00:20:35 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 dbcb2bec-7b16-46da-af44-2406d93008de 0xc003a86527 0xc003a86528}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-rzt9c,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-rzt9c,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-rzt9c,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subd
omain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-04 00:20:35 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-04 00:20:35 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-04 00:20:35 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-04 00:20:35 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.13,PodIP:,StartTime:2020-04-04 00:20:35 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 4 00:20:35.557: INFO: Pod "webserver-deployment-c7997dcc8-f945q" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-f945q webserver-deployment-c7997dcc8- deployment-5065 /api/v1/namespaces/deployment-5065/pods/webserver-deployment-c7997dcc8-f945q f074b9e4-ebff-46e8-822e-c88815763d5f 5204964 0 2020-04-04 00:20:33 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 dbcb2bec-7b16-46da-af44-2406d93008de 0xc003a866a7 0xc003a866a8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-rzt9c,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-rzt9c,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-rzt9c,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Sub
domain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-04 00:20:33 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-04 00:20:33 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-04 00:20:33 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-04 00:20:33 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:,StartTime:2020-04-04 00:20:33 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 4 00:20:35.557: INFO: Pod "webserver-deployment-c7997dcc8-h24p8" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-h24p8 webserver-deployment-c7997dcc8- deployment-5065 /api/v1/namespaces/deployment-5065/pods/webserver-deployment-c7997dcc8-h24p8 39e82609-07ff-4d35-bc4f-88a6b6209c35 5204945 0 2020-04-04 00:20:33 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 dbcb2bec-7b16-46da-af44-2406d93008de 0xc003a86827 0xc003a86828}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-rzt9c,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-rzt9c,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-rzt9c,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subd
omain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-04 00:20:33 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-04 00:20:33 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-04 00:20:33 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-04 00:20:33 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.13,PodIP:,StartTime:2020-04-04 00:20:33 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 4 00:20:35.558: INFO: Pod "webserver-deployment-c7997dcc8-kffdb" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-kffdb webserver-deployment-c7997dcc8- deployment-5065 /api/v1/namespaces/deployment-5065/pods/webserver-deployment-c7997dcc8-kffdb 8342f0ae-c8a8-4f59-acc2-cacd7337b75f 5205031 0 2020-04-04 00:20:35 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 dbcb2bec-7b16-46da-af44-2406d93008de 0xc003a869a7 0xc003a869a8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-rzt9c,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-rzt9c,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-rzt9c,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Sub
domain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-04 00:20:35 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 4 00:20:35.558: INFO: Pod "webserver-deployment-c7997dcc8-klwlc" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-klwlc webserver-deployment-c7997dcc8- deployment-5065 /api/v1/namespaces/deployment-5065/pods/webserver-deployment-c7997dcc8-klwlc 3aea94ea-d07a-4312-9268-743082197d87 5204967 0 2020-04-04 00:20:33 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 dbcb2bec-7b16-46da-af44-2406d93008de 0xc003a86ae7 0xc003a86ae8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-rzt9c,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-rzt9c,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-rzt9c,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subd
omain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-04 00:20:33 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-04 00:20:33 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-04 00:20:33 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-04 00:20:33 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.13,PodIP:,StartTime:2020-04-04 00:20:33 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 4 00:20:35.558: INFO: Pod "webserver-deployment-c7997dcc8-w2fx4" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-w2fx4 webserver-deployment-c7997dcc8- deployment-5065 /api/v1/namespaces/deployment-5065/pods/webserver-deployment-c7997dcc8-w2fx4 1bcd0272-562c-44c4-9411-b8ad29f25c62 5205041 0 2020-04-04 00:20:35 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 dbcb2bec-7b16-46da-af44-2406d93008de 0xc003a86c67 0xc003a86c68}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-rzt9c,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-rzt9c,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-rzt9c,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subd
omain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-04 00:20:35 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 4 00:20:35.558: INFO: Pod "webserver-deployment-c7997dcc8-wjpgh" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-wjpgh webserver-deployment-c7997dcc8- deployment-5065 /api/v1/namespaces/deployment-5065/pods/webserver-deployment-c7997dcc8-wjpgh 93b09fb2-b1ff-49b3-9a75-c17e9b2c7652 5205015 0 2020-04-04 00:20:35 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 dbcb2bec-7b16-46da-af44-2406d93008de 0xc003a86da7 0xc003a86da8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-rzt9c,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-rzt9c,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-rzt9c,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subd
omain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-04 00:20:35 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 4 00:20:35.558: INFO: Pod "webserver-deployment-c7997dcc8-x222p" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-x222p webserver-deployment-c7997dcc8- deployment-5065 /api/v1/namespaces/deployment-5065/pods/webserver-deployment-c7997dcc8-x222p b51c4df6-64a8-453b-bebf-e0ec6c03bba7 5205033 0 2020-04-04 00:20:35 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 dbcb2bec-7b16-46da-af44-2406d93008de 0xc003a86ed7 0xc003a86ed8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-rzt9c,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-rzt9c,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-rzt9c,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subd
omain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-04 00:20:35 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 4 00:20:35.558: INFO: Pod "webserver-deployment-c7997dcc8-zqhwh" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-zqhwh webserver-deployment-c7997dcc8- deployment-5065 /api/v1/namespaces/deployment-5065/pods/webserver-deployment-c7997dcc8-zqhwh 79eb53d5-a64a-43ab-a16e-4d422bab4ab8 5204969 0 2020-04-04 00:20:33 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 dbcb2bec-7b16-46da-af44-2406d93008de 0xc003a87007 0xc003a87008}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-rzt9c,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-rzt9c,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-rzt9c,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Sub
domain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-04 00:20:33 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-04 00:20:33 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-04 00:20:33 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-04 00:20:33 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:,StartTime:2020-04-04 00:20:33 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 4 00:20:35.558: INFO: Pod "webserver-deployment-c7997dcc8-zxmv8" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-zxmv8 webserver-deployment-c7997dcc8- deployment-5065 /api/v1/namespaces/deployment-5065/pods/webserver-deployment-c7997dcc8-zxmv8 b04b09a3-61fd-4297-9d82-b497b1d5d4d9 5205017 0 2020-04-04 00:20:35 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 dbcb2bec-7b16-46da-af44-2406d93008de 0xc003a87187 0xc003a87188}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-rzt9c,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-rzt9c,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-rzt9c,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Sub
domain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-04 00:20:35 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 4 00:20:35.559: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-5065" for this suite. 
• [SLOW TEST:13.217 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should support proportional scaling [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-apps] Deployment deployment should support proportional scaling [Conformance]","total":275,"completed":170,"skipped":2999,"failed":0} SSSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 4 00:20:35.798: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: create the container STEP: wait for the container to reach Failed STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set Apr 4 00:20:53.659: INFO: Expected: &{DONE} to match Container's Termination Message: DONE -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 4 00:20:54.020: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready 
STEP: Destroying namespace "container-runtime-2676" for this suite. • [SLOW TEST:18.649 seconds] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 blackbox test /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:40 on terminated container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:133 should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":275,"completed":171,"skipped":3008,"failed":0} [k8s.io] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 4 00:20:54.447: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:38 [It] should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [AfterEach] [k8s.io] Kubelet 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 4 00:20:59.466: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-8101" for this suite. • [SLOW TEST:5.024 seconds] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 when scheduling a busybox Pod with hostAliases /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:137 should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":172,"skipped":3008,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] custom resource defaulting for requests and from storage works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 4 00:20:59.471: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] custom resource defaulting for requests and from storage works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Apr 4 00:20:59.517: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources 
[Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 4 00:21:00.670: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-6577" for this suite. •{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] custom resource defaulting for requests and from storage works [Conformance]","total":275,"completed":173,"skipped":3027,"failed":0} SSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 4 00:21:00.683: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:38 [It] should print the output to logs [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 4 00:21:04.808: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-9219" for this suite. 
•{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance]","total":275,"completed":174,"skipped":3032,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 4 00:21:04.816: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:84 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:99 STEP: Creating service test in namespace statefulset-4028 [It] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating stateful set ss in namespace statefulset-4028 STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-4028 Apr 4 00:21:04.875: INFO: Found 0 stateful pods, waiting for 1 Apr 4 00:21:14.879: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: Confirming that stateful set scale up will not halt with unhealthy stateful pod Apr 4 00:21:14.883: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec 
--namespace=statefulset-4028 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Apr 4 00:21:15.110: INFO: stderr: "I0404 00:21:15.006279 2501 log.go:172] (0xc0009e9970) (0xc000a36780) Create stream\nI0404 00:21:15.006342 2501 log.go:172] (0xc0009e9970) (0xc000a36780) Stream added, broadcasting: 1\nI0404 00:21:15.010940 2501 log.go:172] (0xc0009e9970) Reply frame received for 1\nI0404 00:21:15.010984 2501 log.go:172] (0xc0009e9970) (0xc000589680) Create stream\nI0404 00:21:15.010996 2501 log.go:172] (0xc0009e9970) (0xc000589680) Stream added, broadcasting: 3\nI0404 00:21:15.012121 2501 log.go:172] (0xc0009e9970) Reply frame received for 3\nI0404 00:21:15.012182 2501 log.go:172] (0xc0009e9970) (0xc000390aa0) Create stream\nI0404 00:21:15.012213 2501 log.go:172] (0xc0009e9970) (0xc000390aa0) Stream added, broadcasting: 5\nI0404 00:21:15.013473 2501 log.go:172] (0xc0009e9970) Reply frame received for 5\nI0404 00:21:15.076041 2501 log.go:172] (0xc0009e9970) Data frame received for 5\nI0404 00:21:15.076075 2501 log.go:172] (0xc000390aa0) (5) Data frame handling\nI0404 00:21:15.076091 2501 log.go:172] (0xc000390aa0) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0404 00:21:15.103140 2501 log.go:172] (0xc0009e9970) Data frame received for 5\nI0404 00:21:15.103178 2501 log.go:172] (0xc000390aa0) (5) Data frame handling\nI0404 00:21:15.103216 2501 log.go:172] (0xc0009e9970) Data frame received for 3\nI0404 00:21:15.103229 2501 log.go:172] (0xc000589680) (3) Data frame handling\nI0404 00:21:15.103242 2501 log.go:172] (0xc000589680) (3) Data frame sent\nI0404 00:21:15.103272 2501 log.go:172] (0xc0009e9970) Data frame received for 3\nI0404 00:21:15.103284 2501 log.go:172] (0xc000589680) (3) Data frame handling\nI0404 00:21:15.104968 2501 log.go:172] (0xc0009e9970) Data frame received for 1\nI0404 00:21:15.105000 2501 log.go:172] (0xc000a36780) (1) Data frame handling\nI0404 00:21:15.105014 2501 log.go:172] (0xc000a36780) 
(1) Data frame sent\nI0404 00:21:15.105029 2501 log.go:172] (0xc0009e9970) (0xc000a36780) Stream removed, broadcasting: 1\nI0404 00:21:15.105057 2501 log.go:172] (0xc0009e9970) Go away received\nI0404 00:21:15.105655 2501 log.go:172] (0xc0009e9970) (0xc000a36780) Stream removed, broadcasting: 1\nI0404 00:21:15.105678 2501 log.go:172] (0xc0009e9970) (0xc000589680) Stream removed, broadcasting: 3\nI0404 00:21:15.105691 2501 log.go:172] (0xc0009e9970) (0xc000390aa0) Stream removed, broadcasting: 5\n" Apr 4 00:21:15.110: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Apr 4 00:21:15.110: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Apr 4 00:21:15.113: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true Apr 4 00:21:25.118: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Apr 4 00:21:25.118: INFO: Waiting for statefulset status.replicas updated to 0 Apr 4 00:21:25.133: INFO: POD NODE PHASE GRACE CONDITIONS Apr 4 00:21:25.133: INFO: ss-0 latest-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-04 00:21:04 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-04 00:21:15 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-04 00:21:15 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-04 00:21:04 +0000 UTC }] Apr 4 00:21:25.133: INFO: Apr 4 00:21:25.133: INFO: StatefulSet ss has not reached scale 3, at 1 Apr 4 00:21:26.137: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.995605606s Apr 4 00:21:27.141: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.991295849s Apr 4 00:21:28.154: INFO: Verifying statefulset ss doesn't scale past 3 for another 
6.987772814s Apr 4 00:21:29.172: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.974040485s Apr 4 00:21:30.176: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.956450619s Apr 4 00:21:31.180: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.952646602s Apr 4 00:21:32.221: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.948571479s Apr 4 00:21:33.224: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.907519302s Apr 4 00:21:34.251: INFO: Verifying statefulset ss doesn't scale past 3 for another 903.918747ms STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-4028 Apr 4 00:21:35.257: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-4028 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Apr 4 00:21:35.436: INFO: stderr: "I0404 00:21:35.372128 2521 log.go:172] (0xc000741a20) (0xc000666be0) Create stream\nI0404 00:21:35.372194 2521 log.go:172] (0xc000741a20) (0xc000666be0) Stream added, broadcasting: 1\nI0404 00:21:35.374574 2521 log.go:172] (0xc000741a20) Reply frame received for 1\nI0404 00:21:35.374615 2521 log.go:172] (0xc000741a20) (0xc000666c80) Create stream\nI0404 00:21:35.374624 2521 log.go:172] (0xc000741a20) (0xc000666c80) Stream added, broadcasting: 3\nI0404 00:21:35.375460 2521 log.go:172] (0xc000741a20) Reply frame received for 3\nI0404 00:21:35.375514 2521 log.go:172] (0xc000741a20) (0xc000666d20) Create stream\nI0404 00:21:35.375529 2521 log.go:172] (0xc000741a20) (0xc000666d20) Stream added, broadcasting: 5\nI0404 00:21:35.376400 2521 log.go:172] (0xc000741a20) Reply frame received for 5\nI0404 00:21:35.428907 2521 log.go:172] (0xc000741a20) Data frame received for 5\nI0404 00:21:35.428939 2521 log.go:172] (0xc000666d20) (5) Data frame handling\nI0404 00:21:35.428950 2521 log.go:172] 
(0xc000666d20) (5) Data frame sent\nI0404 00:21:35.428958 2521 log.go:172] (0xc000741a20) Data frame received for 5\nI0404 00:21:35.428965 2521 log.go:172] (0xc000666d20) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0404 00:21:35.428986 2521 log.go:172] (0xc000741a20) Data frame received for 3\nI0404 00:21:35.428993 2521 log.go:172] (0xc000666c80) (3) Data frame handling\nI0404 00:21:35.429001 2521 log.go:172] (0xc000666c80) (3) Data frame sent\nI0404 00:21:35.429009 2521 log.go:172] (0xc000741a20) Data frame received for 3\nI0404 00:21:35.429016 2521 log.go:172] (0xc000666c80) (3) Data frame handling\nI0404 00:21:35.430728 2521 log.go:172] (0xc000741a20) Data frame received for 1\nI0404 00:21:35.430828 2521 log.go:172] (0xc000666be0) (1) Data frame handling\nI0404 00:21:35.430871 2521 log.go:172] (0xc000666be0) (1) Data frame sent\nI0404 00:21:35.430894 2521 log.go:172] (0xc000741a20) (0xc000666be0) Stream removed, broadcasting: 1\nI0404 00:21:35.430923 2521 log.go:172] (0xc000741a20) Go away received\nI0404 00:21:35.431403 2521 log.go:172] (0xc000741a20) (0xc000666be0) Stream removed, broadcasting: 1\nI0404 00:21:35.431432 2521 log.go:172] (0xc000741a20) (0xc000666c80) Stream removed, broadcasting: 3\nI0404 00:21:35.431449 2521 log.go:172] (0xc000741a20) (0xc000666d20) Stream removed, broadcasting: 5\n" Apr 4 00:21:35.436: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Apr 4 00:21:35.436: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Apr 4 00:21:35.436: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-4028 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Apr 4 00:21:35.641: INFO: stderr: "I0404 00:21:35.568887 2544 log.go:172] (0xc000ba9290) (0xc000b6a3c0) Create stream\nI0404 
00:21:35.568964 2544 log.go:172] (0xc000ba9290) (0xc000b6a3c0) Stream added, broadcasting: 1\nI0404 00:21:35.572584 2544 log.go:172] (0xc000ba9290) Reply frame received for 1\nI0404 00:21:35.572640 2544 log.go:172] (0xc000ba9290) (0xc000b6a460) Create stream\nI0404 00:21:35.572715 2544 log.go:172] (0xc000ba9290) (0xc000b6a460) Stream added, broadcasting: 3\nI0404 00:21:35.573917 2544 log.go:172] (0xc000ba9290) Reply frame received for 3\nI0404 00:21:35.573956 2544 log.go:172] (0xc000ba9290) (0xc000c8c140) Create stream\nI0404 00:21:35.573973 2544 log.go:172] (0xc000ba9290) (0xc000c8c140) Stream added, broadcasting: 5\nI0404 00:21:35.574751 2544 log.go:172] (0xc000ba9290) Reply frame received for 5\nI0404 00:21:35.633370 2544 log.go:172] (0xc000ba9290) Data frame received for 3\nI0404 00:21:35.633436 2544 log.go:172] (0xc000b6a460) (3) Data frame handling\nI0404 00:21:35.633465 2544 log.go:172] (0xc000b6a460) (3) Data frame sent\nI0404 00:21:35.633485 2544 log.go:172] (0xc000ba9290) Data frame received for 3\nI0404 00:21:35.633504 2544 log.go:172] (0xc000b6a460) (3) Data frame handling\nI0404 00:21:35.633581 2544 log.go:172] (0xc000ba9290) Data frame received for 5\nI0404 00:21:35.633615 2544 log.go:172] (0xc000c8c140) (5) Data frame handling\nI0404 00:21:35.633635 2544 log.go:172] (0xc000c8c140) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\nI0404 00:21:35.633649 2544 log.go:172] (0xc000ba9290) Data frame received for 5\nI0404 00:21:35.633700 2544 log.go:172] (0xc000c8c140) (5) Data frame handling\nI0404 00:21:35.635635 2544 log.go:172] (0xc000ba9290) Data frame received for 1\nI0404 00:21:35.635658 2544 log.go:172] (0xc000b6a3c0) (1) Data frame handling\nI0404 00:21:35.635681 2544 log.go:172] (0xc000b6a3c0) (1) Data frame sent\nI0404 00:21:35.635707 2544 log.go:172] (0xc000ba9290) (0xc000b6a3c0) Stream removed, broadcasting: 1\nI0404 00:21:35.635797 2544 log.go:172] 
(0xc000ba9290) Go away received\nI0404 00:21:35.636119 2544 log.go:172] (0xc000ba9290) (0xc000b6a3c0) Stream removed, broadcasting: 1\nI0404 00:21:35.636144 2544 log.go:172] (0xc000ba9290) (0xc000b6a460) Stream removed, broadcasting: 3\nI0404 00:21:35.636163 2544 log.go:172] (0xc000ba9290) (0xc000c8c140) Stream removed, broadcasting: 5\n" Apr 4 00:21:35.641: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Apr 4 00:21:35.641: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Apr 4 00:21:35.641: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-4028 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Apr 4 00:21:35.850: INFO: stderr: "I0404 00:21:35.767524 2564 log.go:172] (0xc000ace6e0) (0xc000b30000) Create stream\nI0404 00:21:35.767586 2564 log.go:172] (0xc000ace6e0) (0xc000b30000) Stream added, broadcasting: 1\nI0404 00:21:35.770610 2564 log.go:172] (0xc000ace6e0) Reply frame received for 1\nI0404 00:21:35.770643 2564 log.go:172] (0xc000ace6e0) (0xc000b300a0) Create stream\nI0404 00:21:35.770650 2564 log.go:172] (0xc000ace6e0) (0xc000b300a0) Stream added, broadcasting: 3\nI0404 00:21:35.771748 2564 log.go:172] (0xc000ace6e0) Reply frame received for 3\nI0404 00:21:35.771775 2564 log.go:172] (0xc000ace6e0) (0xc000906000) Create stream\nI0404 00:21:35.771781 2564 log.go:172] (0xc000ace6e0) (0xc000906000) Stream added, broadcasting: 5\nI0404 00:21:35.772667 2564 log.go:172] (0xc000ace6e0) Reply frame received for 5\nI0404 00:21:35.843046 2564 log.go:172] (0xc000ace6e0) Data frame received for 5\nI0404 00:21:35.843086 2564 log.go:172] (0xc000906000) (5) Data frame handling\nI0404 00:21:35.843098 2564 log.go:172] (0xc000906000) (5) Data frame sent\nI0404 00:21:35.843107 2564 log.go:172] (0xc000ace6e0) Data frame received for 5\nI0404 
00:21:35.843114 2564 log.go:172] (0xc000906000) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\nI0404 00:21:35.843137 2564 log.go:172] (0xc000ace6e0) Data frame received for 3\nI0404 00:21:35.843147 2564 log.go:172] (0xc000b300a0) (3) Data frame handling\nI0404 00:21:35.843155 2564 log.go:172] (0xc000b300a0) (3) Data frame sent\nI0404 00:21:35.843163 2564 log.go:172] (0xc000ace6e0) Data frame received for 3\nI0404 00:21:35.843171 2564 log.go:172] (0xc000b300a0) (3) Data frame handling\nI0404 00:21:35.844742 2564 log.go:172] (0xc000ace6e0) Data frame received for 1\nI0404 00:21:35.844783 2564 log.go:172] (0xc000b30000) (1) Data frame handling\nI0404 00:21:35.844813 2564 log.go:172] (0xc000b30000) (1) Data frame sent\nI0404 00:21:35.844845 2564 log.go:172] (0xc000ace6e0) (0xc000b30000) Stream removed, broadcasting: 1\nI0404 00:21:35.844867 2564 log.go:172] (0xc000ace6e0) Go away received\nI0404 00:21:35.845525 2564 log.go:172] (0xc000ace6e0) (0xc000b30000) Stream removed, broadcasting: 1\nI0404 00:21:35.845569 2564 log.go:172] (0xc000ace6e0) (0xc000b300a0) Stream removed, broadcasting: 3\nI0404 00:21:35.845589 2564 log.go:172] (0xc000ace6e0) (0xc000906000) Stream removed, broadcasting: 5\n" Apr 4 00:21:35.850: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Apr 4 00:21:35.850: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-2: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Apr 4 00:21:35.868: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true Apr 4 00:21:35.868: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true Apr 4 00:21:35.868: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Scale down will not halt with unhealthy stateful pod Apr 4 00:21:35.872: INFO: Running 
'/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-4028 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Apr 4 00:21:36.051: INFO: stderr: "I0404 00:21:35.982812 2587 log.go:172] (0xc00056cf20) (0xc00054e5a0) Create stream\nI0404 00:21:35.982896 2587 log.go:172] (0xc00056cf20) (0xc00054e5a0) Stream added, broadcasting: 1\nI0404 00:21:35.987801 2587 log.go:172] (0xc00056cf20) Reply frame received for 1\nI0404 00:21:35.987838 2587 log.go:172] (0xc00056cf20) (0xc000679680) Create stream\nI0404 00:21:35.987850 2587 log.go:172] (0xc00056cf20) (0xc000679680) Stream added, broadcasting: 3\nI0404 00:21:35.988858 2587 log.go:172] (0xc00056cf20) Reply frame received for 3\nI0404 00:21:35.988908 2587 log.go:172] (0xc00056cf20) (0xc00052eaa0) Create stream\nI0404 00:21:35.988923 2587 log.go:172] (0xc00056cf20) (0xc00052eaa0) Stream added, broadcasting: 5\nI0404 00:21:35.990161 2587 log.go:172] (0xc00056cf20) Reply frame received for 5\nI0404 00:21:36.044202 2587 log.go:172] (0xc00056cf20) Data frame received for 5\nI0404 00:21:36.044254 2587 log.go:172] (0xc00052eaa0) (5) Data frame handling\nI0404 00:21:36.044274 2587 log.go:172] (0xc00052eaa0) (5) Data frame sent\nI0404 00:21:36.044285 2587 log.go:172] (0xc00056cf20) Data frame received for 5\nI0404 00:21:36.044294 2587 log.go:172] (0xc00052eaa0) (5) Data frame handling\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0404 00:21:36.044329 2587 log.go:172] (0xc00056cf20) Data frame received for 3\nI0404 00:21:36.044359 2587 log.go:172] (0xc000679680) (3) Data frame handling\nI0404 00:21:36.044375 2587 log.go:172] (0xc000679680) (3) Data frame sent\nI0404 00:21:36.044386 2587 log.go:172] (0xc00056cf20) Data frame received for 3\nI0404 00:21:36.044395 2587 log.go:172] (0xc000679680) (3) Data frame handling\nI0404 00:21:36.045879 2587 log.go:172] (0xc00056cf20) Data frame received for 1\nI0404 00:21:36.045894 2587 
log.go:172] (0xc00054e5a0) (1) Data frame handling\nI0404 00:21:36.045901 2587 log.go:172] (0xc00054e5a0) (1) Data frame sent\nI0404 00:21:36.045910 2587 log.go:172] (0xc00056cf20) (0xc00054e5a0) Stream removed, broadcasting: 1\nI0404 00:21:36.045923 2587 log.go:172] (0xc00056cf20) Go away received\nI0404 00:21:36.046370 2587 log.go:172] (0xc00056cf20) (0xc00054e5a0) Stream removed, broadcasting: 1\nI0404 00:21:36.046393 2587 log.go:172] (0xc00056cf20) (0xc000679680) Stream removed, broadcasting: 3\nI0404 00:21:36.046410 2587 log.go:172] (0xc00056cf20) (0xc00052eaa0) Stream removed, broadcasting: 5\n" Apr 4 00:21:36.051: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Apr 4 00:21:36.051: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Apr 4 00:21:36.051: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-4028 ss-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Apr 4 00:21:36.261: INFO: stderr: "I0404 00:21:36.165005 2608 log.go:172] (0xc00094e630) (0xc0007c61e0) Create stream\nI0404 00:21:36.165049 2608 log.go:172] (0xc00094e630) (0xc0007c61e0) Stream added, broadcasting: 1\nI0404 00:21:36.167776 2608 log.go:172] (0xc00094e630) Reply frame received for 1\nI0404 00:21:36.167825 2608 log.go:172] (0xc00094e630) (0xc0007ae0a0) Create stream\nI0404 00:21:36.167844 2608 log.go:172] (0xc00094e630) (0xc0007ae0a0) Stream added, broadcasting: 3\nI0404 00:21:36.168988 2608 log.go:172] (0xc00094e630) Reply frame received for 3\nI0404 00:21:36.169025 2608 log.go:172] (0xc00094e630) (0xc000543f40) Create stream\nI0404 00:21:36.169044 2608 log.go:172] (0xc00094e630) (0xc000543f40) Stream added, broadcasting: 5\nI0404 00:21:36.170264 2608 log.go:172] (0xc00094e630) Reply frame received for 5\nI0404 00:21:36.226548 2608 log.go:172] (0xc00094e630) 
Data frame received for 5\nI0404 00:21:36.226575 2608 log.go:172] (0xc000543f40) (5) Data frame handling\nI0404 00:21:36.226595 2608 log.go:172] (0xc000543f40) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0404 00:21:36.256267 2608 log.go:172] (0xc00094e630) Data frame received for 3\nI0404 00:21:36.256306 2608 log.go:172] (0xc0007ae0a0) (3) Data frame handling\nI0404 00:21:36.256322 2608 log.go:172] (0xc0007ae0a0) (3) Data frame sent\nI0404 00:21:36.256336 2608 log.go:172] (0xc00094e630) Data frame received for 3\nI0404 00:21:36.256347 2608 log.go:172] (0xc0007ae0a0) (3) Data frame handling\nI0404 00:21:36.256400 2608 log.go:172] (0xc00094e630) Data frame received for 5\nI0404 00:21:36.256432 2608 log.go:172] (0xc000543f40) (5) Data frame handling\nI0404 00:21:36.257900 2608 log.go:172] (0xc00094e630) Data frame received for 1\nI0404 00:21:36.257916 2608 log.go:172] (0xc0007c61e0) (1) Data frame handling\nI0404 00:21:36.257932 2608 log.go:172] (0xc0007c61e0) (1) Data frame sent\nI0404 00:21:36.257944 2608 log.go:172] (0xc00094e630) (0xc0007c61e0) Stream removed, broadcasting: 1\nI0404 00:21:36.257961 2608 log.go:172] (0xc00094e630) Go away received\nI0404 00:21:36.258213 2608 log.go:172] (0xc00094e630) (0xc0007c61e0) Stream removed, broadcasting: 1\nI0404 00:21:36.258227 2608 log.go:172] (0xc00094e630) (0xc0007ae0a0) Stream removed, broadcasting: 3\nI0404 00:21:36.258234 2608 log.go:172] (0xc00094e630) (0xc000543f40) Stream removed, broadcasting: 5\n" Apr 4 00:21:36.261: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Apr 4 00:21:36.261: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Apr 4 00:21:36.261: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-4028 ss-2 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || 
true' Apr 4 00:21:36.486: INFO: stderr: "I0404 00:21:36.385010 2631 log.go:172] (0xc000988000) (0xc00041ebe0) Create stream\nI0404 00:21:36.385074 2631 log.go:172] (0xc000988000) (0xc00041ebe0) Stream added, broadcasting: 1\nI0404 00:21:36.388652 2631 log.go:172] (0xc000988000) Reply frame received for 1\nI0404 00:21:36.388696 2631 log.go:172] (0xc000988000) (0xc00099e000) Create stream\nI0404 00:21:36.388707 2631 log.go:172] (0xc000988000) (0xc00099e000) Stream added, broadcasting: 3\nI0404 00:21:36.390013 2631 log.go:172] (0xc000988000) Reply frame received for 3\nI0404 00:21:36.390055 2631 log.go:172] (0xc000988000) (0xc0009b2000) Create stream\nI0404 00:21:36.390067 2631 log.go:172] (0xc000988000) (0xc0009b2000) Stream added, broadcasting: 5\nI0404 00:21:36.391038 2631 log.go:172] (0xc000988000) Reply frame received for 5\nI0404 00:21:36.449811 2631 log.go:172] (0xc000988000) Data frame received for 5\nI0404 00:21:36.449843 2631 log.go:172] (0xc0009b2000) (5) Data frame handling\nI0404 00:21:36.449862 2631 log.go:172] (0xc0009b2000) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0404 00:21:36.478913 2631 log.go:172] (0xc000988000) Data frame received for 3\nI0404 00:21:36.478955 2631 log.go:172] (0xc00099e000) (3) Data frame handling\nI0404 00:21:36.478983 2631 log.go:172] (0xc00099e000) (3) Data frame sent\nI0404 00:21:36.479284 2631 log.go:172] (0xc000988000) Data frame received for 5\nI0404 00:21:36.479311 2631 log.go:172] (0xc000988000) Data frame received for 3\nI0404 00:21:36.479340 2631 log.go:172] (0xc00099e000) (3) Data frame handling\nI0404 00:21:36.479362 2631 log.go:172] (0xc0009b2000) (5) Data frame handling\nI0404 00:21:36.481248 2631 log.go:172] (0xc000988000) Data frame received for 1\nI0404 00:21:36.481308 2631 log.go:172] (0xc00041ebe0) (1) Data frame handling\nI0404 00:21:36.481353 2631 log.go:172] (0xc00041ebe0) (1) Data frame sent\nI0404 00:21:36.481387 2631 log.go:172] (0xc000988000) (0xc00041ebe0) Stream 
removed, broadcasting: 1\nI0404 00:21:36.481413 2631 log.go:172] (0xc000988000) Go away received\nI0404 00:21:36.481889 2631 log.go:172] (0xc000988000) (0xc00041ebe0) Stream removed, broadcasting: 1\nI0404 00:21:36.481912 2631 log.go:172] (0xc000988000) (0xc00099e000) Stream removed, broadcasting: 3\nI0404 00:21:36.481924 2631 log.go:172] (0xc000988000) (0xc0009b2000) Stream removed, broadcasting: 5\n" Apr 4 00:21:36.486: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Apr 4 00:21:36.486: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-2: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Apr 4 00:21:36.486: INFO: Waiting for statefulset status.replicas updated to 0 Apr 4 00:21:36.490: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 3 Apr 4 00:21:46.498: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Apr 4 00:21:46.498: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false Apr 4 00:21:46.498: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false Apr 4 00:21:46.511: INFO: POD NODE PHASE GRACE CONDITIONS Apr 4 00:21:46.511: INFO: ss-0 latest-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-04 00:21:04 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-04 00:21:36 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-04 00:21:36 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-04 00:21:04 +0000 UTC }] Apr 4 00:21:46.511: INFO: ss-1 latest-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-04 00:21:25 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-04 00:21:36 +0000 UTC ContainersNotReady containers with unready status: 
[webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-04 00:21:36 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-04 00:21:25 +0000 UTC }] Apr 4 00:21:46.511: INFO: ss-2 latest-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-04 00:21:25 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-04 00:21:37 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-04 00:21:37 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-04 00:21:25 +0000 UTC }] Apr 4 00:21:46.511: INFO: Apr 4 00:21:46.511: INFO: StatefulSet ss has not reached scale 0, at 3 Apr 4 00:21:47.574: INFO: POD NODE PHASE GRACE CONDITIONS Apr 4 00:21:47.574: INFO: ss-0 latest-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-04 00:21:04 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-04 00:21:36 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-04 00:21:36 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-04 00:21:04 +0000 UTC }] Apr 4 00:21:47.574: INFO: ss-1 latest-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-04 00:21:25 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-04 00:21:36 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-04 00:21:36 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-04 00:21:25 +0000 UTC }] Apr 4 00:21:47.574: INFO: ss-2 latest-worker Running 30s [{Initialized True 0001-01-01 
00:00:00 +0000 UTC 2020-04-04 00:21:25 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-04 00:21:37 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-04 00:21:37 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-04 00:21:25 +0000 UTC }] Apr 4 00:21:47.574: INFO: Apr 4 00:21:47.574: INFO: StatefulSet ss has not reached scale 0, at 3 Apr 4 00:21:48.579: INFO: POD NODE PHASE GRACE CONDITIONS Apr 4 00:21:48.579: INFO: ss-0 latest-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-04 00:21:04 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-04 00:21:36 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-04 00:21:36 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-04 00:21:04 +0000 UTC }] Apr 4 00:21:48.579: INFO: ss-1 latest-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-04 00:21:25 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-04 00:21:36 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-04 00:21:36 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-04 00:21:25 +0000 UTC }] Apr 4 00:21:48.579: INFO: ss-2 latest-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-04 00:21:25 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-04 00:21:37 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-04 00:21:37 +0000 UTC ContainersNotReady containers with unready status: 
[webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-04 00:21:25 +0000 UTC }] Apr 4 00:21:48.579: INFO: Apr 4 00:21:48.579: INFO: StatefulSet ss has not reached scale 0, at 3 Apr 4 00:21:49.584: INFO: POD NODE PHASE GRACE CONDITIONS Apr 4 00:21:49.584: INFO: ss-0 latest-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-04 00:21:04 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-04 00:21:36 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-04 00:21:36 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-04 00:21:04 +0000 UTC }] Apr 4 00:21:49.584: INFO: ss-1 latest-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-04 00:21:25 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-04 00:21:36 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-04 00:21:36 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-04 00:21:25 +0000 UTC }] Apr 4 00:21:49.584: INFO: ss-2 latest-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-04 00:21:25 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-04 00:21:37 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-04 00:21:37 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-04 00:21:25 +0000 UTC }] Apr 4 00:21:49.584: INFO: Apr 4 00:21:49.584: INFO: StatefulSet ss has not reached scale 0, at 3 Apr 4 00:21:50.590: INFO: POD NODE PHASE GRACE CONDITIONS Apr 4 00:21:50.590: INFO: ss-0 latest-worker Pending 30s [{Initialized True 
0001-01-01 00:00:00 +0000 UTC 2020-04-04 00:21:04 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-04 00:21:36 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-04 00:21:36 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-04 00:21:04 +0000 UTC }] Apr 4 00:21:50.590: INFO: ss-1 latest-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-04 00:21:25 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-04 00:21:36 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-04 00:21:36 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-04 00:21:25 +0000 UTC }] Apr 4 00:21:50.590: INFO: ss-2 latest-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-04 00:21:25 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-04 00:21:37 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-04 00:21:37 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-04 00:21:25 +0000 UTC }] Apr 4 00:21:50.590: INFO: Apr 4 00:21:50.590: INFO: StatefulSet ss has not reached scale 0, at 3 Apr 4 00:21:51.595: INFO: POD NODE PHASE GRACE CONDITIONS Apr 4 00:21:51.595: INFO: ss-0 latest-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-04 00:21:04 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-04 00:21:36 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-04 00:21:36 +0000 UTC ContainersNotReady containers with unready 
status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-04 00:21:04 +0000 UTC }] Apr 4 00:21:51.595: INFO: ss-1 latest-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-04 00:21:25 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-04 00:21:36 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-04 00:21:36 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-04 00:21:25 +0000 UTC }] Apr 4 00:21:51.595: INFO: ss-2 latest-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-04 00:21:25 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-04 00:21:37 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-04 00:21:37 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-04 00:21:25 +0000 UTC }] Apr 4 00:21:51.595: INFO: Apr 4 00:21:51.595: INFO: StatefulSet ss has not reached scale 0, at 3 Apr 4 00:21:52.600: INFO: POD NODE PHASE GRACE CONDITIONS Apr 4 00:21:52.600: INFO: ss-0 latest-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-04 00:21:04 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-04 00:21:36 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-04 00:21:36 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-04 00:21:04 +0000 UTC }] Apr 4 00:21:52.600: INFO: ss-1 latest-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-04 00:21:25 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-04 00:21:36 +0000 UTC 
ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-04 00:21:36 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-04 00:21:25 +0000 UTC }] Apr 4 00:21:52.600: INFO: ss-2 latest-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-04 00:21:25 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-04 00:21:37 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-04 00:21:37 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-04 00:21:25 +0000 UTC }] Apr 4 00:21:52.600: INFO: Apr 4 00:21:52.600: INFO: StatefulSet ss has not reached scale 0, at 3 Apr 4 00:21:53.687: INFO: Verifying statefulset ss doesn't scale past 0 for another 2.905074846s Apr 4 00:21:54.692: INFO: Verifying statefulset ss doesn't scale past 0 for another 1.817740932s Apr 4 00:21:55.696: INFO: Verifying statefulset ss doesn't scale past 0 for another 813.365619ms STEP: Scaling down stateful set ss to 0 replicas and waiting until none of pods will run in namespace statefulset-4028 Apr 4 00:21:56.700: INFO: Scaling statefulset ss to 0 Apr 4 00:21:56.710: INFO: Waiting for statefulset status.replicas updated to 0 [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:110 Apr 4 00:21:56.712: INFO: Deleting all statefulset in ns statefulset-4028 Apr 4 00:21:56.715: INFO: Scaling statefulset ss to 0 Apr 4 00:21:56.724: INFO: Waiting for statefulset status.replicas updated to 0 Apr 4 00:21:56.726: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 4 
00:21:56.777: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-4028" for this suite. • [SLOW TEST:51.968 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]","total":275,"completed":175,"skipped":3075,"failed":0} SSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] should include custom resource definition resources in discovery documents [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 4 00:21:56.785: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] should include custom resource definition resources in discovery documents [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: fetching the /apis discovery document STEP: finding the apiextensions.k8s.io API group in the /apis discovery document STEP: finding the apiextensions.k8s.io/v1 
API group/version in the /apis discovery document STEP: fetching the /apis/apiextensions.k8s.io discovery document STEP: finding the apiextensions.k8s.io/v1 API group/version in the /apis/apiextensions.k8s.io discovery document STEP: fetching the /apis/apiextensions.k8s.io/v1 discovery document STEP: finding customresourcedefinitions resources in the /apis/apiextensions.k8s.io/v1 discovery document [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 4 00:21:56.835: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-4133" for this suite. •{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] should include custom resource definition resources in discovery documents [Conformance]","total":275,"completed":176,"skipped":3086,"failed":0} SSSSS ------------------------------ [sig-cli] Kubectl client Kubectl describe should check if kubectl describe prints relevant information for rc and pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 4 00:21:56.864: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219 [It] should check if kubectl describe prints relevant information for rc and pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Apr 4 00:21:56.940: INFO: Running 
'/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-725' Apr 4 00:21:57.275: INFO: stderr: "" Apr 4 00:21:57.275: INFO: stdout: "replicationcontroller/agnhost-master created\n" Apr 4 00:21:57.275: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-725' Apr 4 00:21:57.535: INFO: stderr: "" Apr 4 00:21:57.535: INFO: stdout: "service/agnhost-master created\n" STEP: Waiting for Agnhost master to start. Apr 4 00:21:58.540: INFO: Selector matched 1 pods for map[app:agnhost] Apr 4 00:21:58.540: INFO: Found 0 / 1 Apr 4 00:21:59.541: INFO: Selector matched 1 pods for map[app:agnhost] Apr 4 00:21:59.541: INFO: Found 0 / 1 Apr 4 00:22:00.540: INFO: Selector matched 1 pods for map[app:agnhost] Apr 4 00:22:00.540: INFO: Found 0 / 1 Apr 4 00:22:01.541: INFO: Selector matched 1 pods for map[app:agnhost] Apr 4 00:22:01.541: INFO: Found 1 / 1 Apr 4 00:22:01.541: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 Apr 4 00:22:01.544: INFO: Selector matched 1 pods for map[app:agnhost] Apr 4 00:22:01.544: INFO: ForEach: Found 1 pods from the filter. Now looping through them. 
Apr 4 00:22:01.545: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config describe pod agnhost-master-slfz9 --namespace=kubectl-725' Apr 4 00:22:01.647: INFO: stderr: "" Apr 4 00:22:01.647: INFO: stdout: "Name: agnhost-master-slfz9\nNamespace: kubectl-725\nPriority: 0\nNode: latest-worker/172.17.0.13\nStart Time: Sat, 04 Apr 2020 00:21:57 +0000\nLabels: app=agnhost\n role=master\nAnnotations: \nStatus: Running\nIP: 10.244.2.22\nIPs:\n IP: 10.244.2.22\nControlled By: ReplicationController/agnhost-master\nContainers:\n agnhost-master:\n Container ID: containerd://7baa8de07eae66484e8ab1f675669b3d48ce741a13292c4eaff1bb90085a18c0\n Image: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12\n Image ID: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost@sha256:1d7f0d77a6f07fd507f147a38d06a7c8269ebabd4f923bfe46d4fb8b396a520c\n Port: 6379/TCP\n Host Port: 0/TCP\n State: Running\n Started: Sat, 04 Apr 2020 00:21:59 +0000\n Ready: True\n Restart Count: 0\n Environment: \n Mounts:\n /var/run/secrets/kubernetes.io/serviceaccount from default-token-lk4bh (ro)\nConditions:\n Type Status\n Initialized True \n Ready True \n ContainersReady True \n PodScheduled True \nVolumes:\n default-token-lk4bh:\n Type: Secret (a volume populated by a Secret)\n SecretName: default-token-lk4bh\n Optional: false\nQoS Class: BestEffort\nNode-Selectors: \nTolerations: node.kubernetes.io/not-ready:NoExecute for 300s\n node.kubernetes.io/unreachable:NoExecute for 300s\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal Scheduled default-scheduler Successfully assigned kubectl-725/agnhost-master-slfz9 to latest-worker\n Normal Pulled 3s kubelet, latest-worker Container image \"us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12\" already present on machine\n Normal Created 2s kubelet, latest-worker Created container agnhost-master\n Normal Started 2s kubelet, latest-worker Started container agnhost-master\n" 
Apr 4 00:22:01.647: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config describe rc agnhost-master --namespace=kubectl-725' Apr 4 00:22:01.768: INFO: stderr: "" Apr 4 00:22:01.768: INFO: stdout: "Name: agnhost-master\nNamespace: kubectl-725\nSelector: app=agnhost,role=master\nLabels: app=agnhost\n role=master\nAnnotations: \nReplicas: 1 current / 1 desired\nPods Status: 1 Running / 0 Waiting / 0 Succeeded / 0 Failed\nPod Template:\n Labels: app=agnhost\n role=master\n Containers:\n agnhost-master:\n Image: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12\n Port: 6379/TCP\n Host Port: 0/TCP\n Environment: \n Mounts: \n Volumes: \nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal SuccessfulCreate 4s replication-controller Created pod: agnhost-master-slfz9\n" Apr 4 00:22:01.768: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config describe service agnhost-master --namespace=kubectl-725' Apr 4 00:22:01.896: INFO: stderr: "" Apr 4 00:22:01.896: INFO: stdout: "Name: agnhost-master\nNamespace: kubectl-725\nLabels: app=agnhost\n role=master\nAnnotations: \nSelector: app=agnhost,role=master\nType: ClusterIP\nIP: 10.96.165.80\nPort: 6379/TCP\nTargetPort: agnhost-server/TCP\nEndpoints: 10.244.2.22:6379\nSession Affinity: None\nEvents: \n" Apr 4 00:22:01.907: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config describe node latest-control-plane' Apr 4 00:22:02.056: INFO: stderr: "" Apr 4 00:22:02.056: INFO: stdout: "Name: latest-control-plane\nRoles: master\nLabels: beta.kubernetes.io/arch=amd64\n beta.kubernetes.io/os=linux\n kubernetes.io/arch=amd64\n kubernetes.io/hostname=latest-control-plane\n kubernetes.io/os=linux\n node-role.kubernetes.io/master=\nAnnotations: kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock\n node.alpha.kubernetes.io/ttl: 0\n 
volumes.kubernetes.io/controller-managed-attach-detach: true\nCreationTimestamp: Sun, 15 Mar 2020 18:27:32 +0000\nTaints: node-role.kubernetes.io/master:NoSchedule\nUnschedulable: false\nLease:\n HolderIdentity: latest-control-plane\n AcquireTime: \n RenewTime: Sat, 04 Apr 2020 00:21:58 +0000\nConditions:\n Type Status LastHeartbeatTime LastTransitionTime Reason Message\n ---- ------ ----------------- ------------------ ------ -------\n MemoryPressure False Sat, 04 Apr 2020 00:20:11 +0000 Sun, 15 Mar 2020 18:27:32 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available\n DiskPressure False Sat, 04 Apr 2020 00:20:11 +0000 Sun, 15 Mar 2020 18:27:32 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure\n PIDPressure False Sat, 04 Apr 2020 00:20:11 +0000 Sun, 15 Mar 2020 18:27:32 +0000 KubeletHasSufficientPID kubelet has sufficient PID available\n Ready True Sat, 04 Apr 2020 00:20:11 +0000 Sun, 15 Mar 2020 18:28:05 +0000 KubeletReady kubelet is posting ready status\nAddresses:\n InternalIP: 172.17.0.11\n Hostname: latest-control-plane\nCapacity:\n cpu: 16\n ephemeral-storage: 2303189964Ki\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 131759892Ki\n pods: 110\nAllocatable:\n cpu: 16\n ephemeral-storage: 2303189964Ki\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 131759892Ki\n pods: 110\nSystem Info:\n Machine ID: 96fd1b5d260b433d8f617f455164eb5a\n System UUID: 611bedf3-8581-4e6e-a43b-01a437bb59ad\n Boot ID: ca2aa731-f890-4956-92a1-ff8c7560d571\n Kernel Version: 4.15.0-88-generic\n OS Image: Ubuntu 19.10\n Operating System: linux\n Architecture: amd64\n Container Runtime Version: containerd://1.3.2\n Kubelet Version: v1.17.0\n Kube-Proxy Version: v1.17.0\nPodCIDR: 10.244.0.0/24\nPodCIDRs: 10.244.0.0/24\nNon-terminated Pods: (9 in total)\n Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits AGE\n --------- ---- ------------ ---------- --------------- ------------- ---\n kube-system coredns-6955765f44-f7wtl 100m (0%) 0 (0%) 70Mi 
(0%) 170Mi (0%) 19d\n kube-system coredns-6955765f44-lq4t7 100m (0%) 0 (0%) 70Mi (0%) 170Mi (0%) 19d\n kube-system etcd-latest-control-plane 0 (0%) 0 (0%) 0 (0%) 0 (0%) 19d\n kube-system kindnet-sx5s7 100m (0%) 100m (0%) 50Mi (0%) 50Mi (0%) 19d\n kube-system kube-apiserver-latest-control-plane 250m (1%) 0 (0%) 0 (0%) 0 (0%) 19d\n kube-system kube-controller-manager-latest-control-plane 200m (1%) 0 (0%) 0 (0%) 0 (0%) 19d\n kube-system kube-proxy-jpqvf 0 (0%) 0 (0%) 0 (0%) 0 (0%) 19d\n kube-system kube-scheduler-latest-control-plane 100m (0%) 0 (0%) 0 (0%) 0 (0%) 19d\n local-path-storage local-path-provisioner-7745554f7f-fmsmz 0 (0%) 0 (0%) 0 (0%) 0 (0%) 19d\nAllocated resources:\n (Total limits may be over 100 percent, i.e., overcommitted.)\n Resource Requests Limits\n -------- -------- ------\n cpu 850m (5%) 100m (0%)\n memory 190Mi (0%) 390Mi (0%)\n ephemeral-storage 0 (0%) 0 (0%)\n hugepages-1Gi 0 (0%) 0 (0%)\n hugepages-2Mi 0 (0%) 0 (0%)\nEvents: \n" Apr 4 00:22:02.056: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config describe namespace kubectl-725' Apr 4 00:22:02.209: INFO: stderr: "" Apr 4 00:22:02.209: INFO: stdout: "Name: kubectl-725\nLabels: e2e-framework=kubectl\n e2e-run=cad17071-4b0c-4581-83f9-d423cc9db14b\nAnnotations: \nStatus: Active\n\nNo resource quota.\n\nNo LimitRange resource.\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 4 00:22:02.209: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-725" for this suite. 
• [SLOW TEST:5.351 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl describe /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:978 should check if kubectl describe prints relevant information for rc and pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl describe should check if kubectl describe prints relevant information for rc and pods [Conformance]","total":275,"completed":177,"skipped":3091,"failed":0} SSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 4 00:22:02.215: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating projection with secret that has name projected-secret-test-2e270016-1859-4ed3-bf8d-fc47e8884c4c STEP: Creating a pod to test consume secrets Apr 4 00:22:02.280: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-a6827375-2faa-410b-b3db-a1cd95c6eb90" in namespace "projected-5866" to be "Succeeded or Failed" Apr 4 00:22:02.284: INFO: Pod "pod-projected-secrets-a6827375-2faa-410b-b3db-a1cd95c6eb90": Phase="Pending", Reason="", 
readiness=false. Elapsed: 4.521863ms Apr 4 00:22:04.287: INFO: Pod "pod-projected-secrets-a6827375-2faa-410b-b3db-a1cd95c6eb90": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007689762s Apr 4 00:22:06.291: INFO: Pod "pod-projected-secrets-a6827375-2faa-410b-b3db-a1cd95c6eb90": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011192146s STEP: Saw pod success Apr 4 00:22:06.291: INFO: Pod "pod-projected-secrets-a6827375-2faa-410b-b3db-a1cd95c6eb90" satisfied condition "Succeeded or Failed" Apr 4 00:22:06.293: INFO: Trying to get logs from node latest-worker2 pod pod-projected-secrets-a6827375-2faa-410b-b3db-a1cd95c6eb90 container projected-secret-volume-test: STEP: delete the pod Apr 4 00:22:06.360: INFO: Waiting for pod pod-projected-secrets-a6827375-2faa-410b-b3db-a1cd95c6eb90 to disappear Apr 4 00:22:06.367: INFO: Pod pod-projected-secrets-a6827375-2faa-410b-b3db-a1cd95c6eb90 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 4 00:22:06.367: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5866" for this suite. 
•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":178,"skipped":3100,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 4 00:22:06.374: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating secret with name secret-test-b79ac75c-da15-4ec0-9b51-a7f7d1e66981 STEP: Creating a pod to test consume secrets Apr 4 00:22:06.429: INFO: Waiting up to 5m0s for pod "pod-secrets-e4f5c01b-64fc-4185-805f-3671c18dd894" in namespace "secrets-7407" to be "Succeeded or Failed" Apr 4 00:22:06.442: INFO: Pod "pod-secrets-e4f5c01b-64fc-4185-805f-3671c18dd894": Phase="Pending", Reason="", readiness=false. Elapsed: 12.254511ms Apr 4 00:22:08.446: INFO: Pod "pod-secrets-e4f5c01b-64fc-4185-805f-3671c18dd894": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016190028s Apr 4 00:22:10.450: INFO: Pod "pod-secrets-e4f5c01b-64fc-4185-805f-3671c18dd894": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.020420752s STEP: Saw pod success Apr 4 00:22:10.450: INFO: Pod "pod-secrets-e4f5c01b-64fc-4185-805f-3671c18dd894" satisfied condition "Succeeded or Failed" Apr 4 00:22:10.453: INFO: Trying to get logs from node latest-worker pod pod-secrets-e4f5c01b-64fc-4185-805f-3671c18dd894 container secret-volume-test: STEP: delete the pod Apr 4 00:22:10.482: INFO: Waiting for pod pod-secrets-e4f5c01b-64fc-4185-805f-3671c18dd894 to disappear Apr 4 00:22:10.487: INFO: Pod pod-secrets-e4f5c01b-64fc-4185-805f-3671c18dd894 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 4 00:22:10.487: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-7407" for this suite. •{"msg":"PASSED [sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":275,"completed":179,"skipped":3113,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 4 00:22:10.495: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:74 [It] RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Apr 4 00:22:10.575: INFO: Creating replica set "test-rolling-update-controller" (going 
to be adopted) Apr 4 00:22:10.593: INFO: Pod name sample-pod: Found 0 pods out of 1 Apr 4 00:22:15.598: INFO: Pod name sample-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Apr 4 00:22:15.598: INFO: Creating deployment "test-rolling-update-deployment" Apr 4 00:22:15.612: INFO: Ensuring deployment "test-rolling-update-deployment" gets the next revision from the one the adopted replica set "test-rolling-update-controller" has Apr 4 00:22:15.622: INFO: new replicaset for deployment "test-rolling-update-deployment" is yet to be created Apr 4 00:22:17.630: INFO: Ensuring status for deployment "test-rolling-update-deployment" is the expected Apr 4 00:22:17.633: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721556535, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721556535, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721556535, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721556535, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-664dd8fc7f\" is progressing."}}, CollisionCount:(*int32)(nil)} Apr 4 00:22:19.637: INFO: Ensuring deployment "test-rolling-update-deployment" has one old replica set (the one it adopted) [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:68 Apr 4 00:22:19.684: INFO: Deployment "test-rolling-update-deployment": 
&Deployment{ObjectMeta:{test-rolling-update-deployment deployment-346 /apis/apps/v1/namespaces/deployment-346/deployments/test-rolling-update-deployment f5226434-8afc-48b6-bbdd-5a9e0c4edd8b 5206065 1 2020-04-04 00:22:15 +0000 UTC map[name:sample-pod] map[deployment.kubernetes.io/revision:3546343826724305833] [] [] []},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod] map[] [] [] []} {[] [] [{agnhost us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc002e7a418 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2020-04-04 00:22:15 +0000 UTC,LastTransitionTime:2020-04-04 00:22:15 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-rolling-update-deployment-664dd8fc7f" has successfully 
progressed.,LastUpdateTime:2020-04-04 00:22:18 +0000 UTC,LastTransitionTime:2020-04-04 00:22:15 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},} Apr 4 00:22:19.687: INFO: New ReplicaSet "test-rolling-update-deployment-664dd8fc7f" of Deployment "test-rolling-update-deployment": &ReplicaSet{ObjectMeta:{test-rolling-update-deployment-664dd8fc7f deployment-346 /apis/apps/v1/namespaces/deployment-346/replicasets/test-rolling-update-deployment-664dd8fc7f 7c7959bb-0956-4ba5-879e-b4391fb6d1a4 5206054 1 2020-04-04 00:22:15 +0000 UTC map[name:sample-pod pod-template-hash:664dd8fc7f] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:3546343826724305833] [{apps/v1 Deployment test-rolling-update-deployment f5226434-8afc-48b6-bbdd-5a9e0c4edd8b 0xc0039046f7 0xc0039046f8}] [] []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod-template-hash: 664dd8fc7f,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod pod-template-hash:664dd8fc7f] map[] [] [] []} {[] [] [{agnhost us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc003904768 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] 
[]}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} Apr 4 00:22:19.687: INFO: All old ReplicaSets of Deployment "test-rolling-update-deployment": Apr 4 00:22:19.687: INFO: &ReplicaSet{ObjectMeta:{test-rolling-update-controller deployment-346 /apis/apps/v1/namespaces/deployment-346/replicasets/test-rolling-update-controller ab96a579-4d3c-4d5f-babf-38b217548ac5 5206063 2 2020-04-04 00:22:10 +0000 UTC map[name:sample-pod pod:httpd] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:3546343826724305832] [{apps/v1 Deployment test-rolling-update-deployment f5226434-8afc-48b6-bbdd-5a9e0c4edd8b 0xc003904627 0xc003904628}] [] []},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod pod:httpd] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc003904688 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Apr 4 00:22:19.689: INFO: Pod "test-rolling-update-deployment-664dd8fc7f-4z2d8" is available: &Pod{ObjectMeta:{test-rolling-update-deployment-664dd8fc7f-4z2d8 test-rolling-update-deployment-664dd8fc7f- deployment-346 
/api/v1/namespaces/deployment-346/pods/test-rolling-update-deployment-664dd8fc7f-4z2d8 b8be5aad-c69b-4496-ba43-5c8baa887e71 5206053 0 2020-04-04 00:22:15 +0000 UTC map[name:sample-pod pod-template-hash:664dd8fc7f] map[] [{apps/v1 ReplicaSet test-rolling-update-deployment-664dd8fc7f 7c7959bb-0956-4ba5-879e-b4391fb6d1a4 0xc003904c37 0xc003904c38}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-blkgw,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-blkgw,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-blkgw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[str
ing]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-04 00:22:15 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-04 00:22:17 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-04 00:22:17 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-04 00:22:15 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:10.244.1.77,StartTime:2020-04-04 00:22:15 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-04-04 00:22:17 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12,ImageID:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost@sha256:1d7f0d77a6f07fd507f147a38d06a7c8269ebabd4f923bfe46d4fb8b396a520c,ContainerID:containerd://11ce4b6d4d10e0951d89538328820a60a56df4cede86c0eb6dae2d47d2645471,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.77,},},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 4 00:22:19.690: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-346" for this suite. 
• [SLOW TEST:9.201 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance]","total":275,"completed":180,"skipped":3131,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Deployment deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 4 00:22:19.696: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:74 [It] deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Apr 4 00:22:19.763: INFO: Pod name rollover-pod: Found 0 pods out of 1 Apr 4 00:22:24.767: INFO: Pod name rollover-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Apr 4 00:22:24.767: INFO: Waiting for pods owned by replica set "test-rollover-controller" to become ready Apr 4 00:22:26.771: INFO: Creating deployment "test-rollover-deployment" Apr 4 00:22:26.781: INFO: Make sure deployment "test-rollover-deployment" performs scaling operations Apr 4 00:22:28.787: INFO: Check revision of new replica set for deployment "test-rollover-deployment" Apr 4 00:22:28.794: INFO: Ensure that both 
replica sets have 1 created replica Apr 4 00:22:28.800: INFO: Rollover old replica sets for deployment "test-rollover-deployment" with new image update Apr 4 00:22:28.806: INFO: Updating deployment test-rollover-deployment Apr 4 00:22:28.806: INFO: Wait deployment "test-rollover-deployment" to be observed by the deployment controller Apr 4 00:22:30.815: INFO: Wait for revision update of deployment "test-rollover-deployment" to 2 Apr 4 00:22:30.821: INFO: Make sure deployment "test-rollover-deployment" is complete Apr 4 00:22:30.826: INFO: all replica sets need to contain the pod-template-hash label Apr 4 00:22:30.826: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721556546, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721556546, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721556549, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721556546, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-78df7bc796\" is progressing."}}, CollisionCount:(*int32)(nil)} Apr 4 00:22:32.833: INFO: all replica sets need to contain the pod-template-hash label Apr 4 00:22:32.833: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721556546, 
loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721556546, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721556551, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721556546, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-78df7bc796\" is progressing."}}, CollisionCount:(*int32)(nil)} Apr 4 00:22:34.834: INFO: all replica sets need to contain the pod-template-hash label Apr 4 00:22:34.834: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721556546, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721556546, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721556551, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721556546, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-78df7bc796\" is progressing."}}, CollisionCount:(*int32)(nil)} Apr 4 00:22:36.834: INFO: all replica sets need to contain the pod-template-hash label Apr 4 00:22:36.834: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, 
Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721556546, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721556546, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721556551, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721556546, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-78df7bc796\" is progressing."}}, CollisionCount:(*int32)(nil)} Apr 4 00:22:38.834: INFO: all replica sets need to contain the pod-template-hash label Apr 4 00:22:38.834: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721556546, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721556546, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721556551, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721556546, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-78df7bc796\" is progressing."}}, CollisionCount:(*int32)(nil)} Apr 4 00:22:40.834: INFO: all replica sets need to contain the pod-template-hash label Apr 4 00:22:40.834: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, 
Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721556546, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721556546, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721556551, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721556546, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-78df7bc796\" is progressing."}}, CollisionCount:(*int32)(nil)} Apr 4 00:22:42.834: INFO: Apr 4 00:22:42.834: INFO: Ensure that both old replica sets have no replicas [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:68 Apr 4 00:22:42.841: INFO: Deployment "test-rollover-deployment": &Deployment{ObjectMeta:{test-rollover-deployment deployment-6698 /apis/apps/v1/namespaces/deployment-6698/deployments/test-rollover-deployment f543d8b8-0fa8-4b59-a935-d10d642b3984 5206237 2 2020-04-04 00:22:26 +0000 UTC map[name:rollover-pod] map[deployment.kubernetes.io/revision:2] [] [] []},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod] map[] [] [] []} {[] [] [{agnhost us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent 
SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc003b262d8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:0,MaxSurge:1,},},MinReadySeconds:10,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2020-04-04 00:22:26 +0000 UTC,LastTransitionTime:2020-04-04 00:22:26 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-rollover-deployment-78df7bc796" has successfully progressed.,LastUpdateTime:2020-04-04 00:22:41 +0000 UTC,LastTransitionTime:2020-04-04 00:22:26 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},} Apr 4 00:22:42.844: INFO: New ReplicaSet "test-rollover-deployment-78df7bc796" of Deployment "test-rollover-deployment": &ReplicaSet{ObjectMeta:{test-rollover-deployment-78df7bc796 deployment-6698 /apis/apps/v1/namespaces/deployment-6698/replicasets/test-rollover-deployment-78df7bc796 65031bfb-589d-4d7e-bb84-6755369f5ac3 5206226 2 2020-04-04 00:22:28 +0000 UTC map[name:rollover-pod pod-template-hash:78df7bc796] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment test-rollover-deployment 
f543d8b8-0fa8-4b59-a935-d10d642b3984 0xc003b267d7 0xc003b267d8}] [] []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 78df7bc796,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod-template-hash:78df7bc796] map[] [] [] []} {[] [] [{agnhost us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc003b26848 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:2,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} Apr 4 00:22:42.844: INFO: All old ReplicaSets of Deployment "test-rollover-deployment": Apr 4 00:22:42.844: INFO: &ReplicaSet{ObjectMeta:{test-rollover-controller deployment-6698 /apis/apps/v1/namespaces/deployment-6698/replicasets/test-rollover-controller 899bee36-f4c5-4976-8c71-bf4f10a75a3e 5206235 2 2020-04-04 00:22:19 +0000 UTC map[name:rollover-pod pod:httpd] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2] [{apps/v1 Deployment test-rollover-deployment f543d8b8-0fa8-4b59-a935-d10d642b3984 0xc003b266ef 0xc003b26700}] [] []},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 
00:00:00 +0000 UTC map[name:rollover-pod pod:httpd] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc003b26768 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Apr 4 00:22:42.844: INFO: &ReplicaSet{ObjectMeta:{test-rollover-deployment-f6c94f66c deployment-6698 /apis/apps/v1/namespaces/deployment-6698/replicasets/test-rollover-deployment-f6c94f66c 50258095-c218-4d59-ba7d-48db1cde7f8d 5206177 2 2020-04-04 00:22:26 +0000 UTC map[name:rollover-pod pod-template-hash:f6c94f66c] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-rollover-deployment f543d8b8-0fa8-4b59-a935-d10d642b3984 0xc003b268b0 0xc003b268b1}] [] []},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: f6c94f66c,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod-template-hash:f6c94f66c] map[] [] [] []} {[] [] [{redis-slave gcr.io/google_samples/gb-redisslave:nonexistent [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc003b26928 ClusterFirst map[] false false 
false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Apr 4 00:22:42.847: INFO: Pod "test-rollover-deployment-78df7bc796-r84qx" is available: &Pod{ObjectMeta:{test-rollover-deployment-78df7bc796-r84qx test-rollover-deployment-78df7bc796- deployment-6698 /api/v1/namespaces/deployment-6698/pods/test-rollover-deployment-78df7bc796-r84qx 834561b7-f697-449f-b7d8-f9fdd888e155 5206194 0 2020-04-04 00:22:28 +0000 UTC map[name:rollover-pod pod-template-hash:78df7bc796] map[] [{apps/v1 ReplicaSet test-rollover-deployment-78df7bc796 65031bfb-589d-4d7e-bb84-6755369f5ac3 0xc003c714b7 0xc003c714b8}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-w95m6,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-w95m6,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-w95m6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessPr
obe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-04 00:22:28 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 
UTC,LastTransitionTime:2020-04-04 00:22:31 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-04 00:22:31 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-04 00:22:28 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.13,PodIP:10.244.2.25,StartTime:2020-04-04 00:22:28 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-04-04 00:22:31 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12,ImageID:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost@sha256:1d7f0d77a6f07fd507f147a38d06a7c8269ebabd4f923bfe46d4fb8b396a520c,ContainerID:containerd://320271a4a00156e780eee4f8ab06d442aa842494765d2cea76711fd14c017eec,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.25,},},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 4 00:22:42.847: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-6698" for this suite. 
• [SLOW TEST:23.158 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-apps] Deployment deployment should support rollover [Conformance]","total":275,"completed":181,"skipped":3152,"failed":0} SSSS ------------------------------ [sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 4 00:22:42.854: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a test namespace STEP: Waiting for a default service account to be provisioned in namespace STEP: Creating a pod in the namespace STEP: Waiting for the pod to have running status STEP: Deleting the namespace STEP: Waiting for the namespace to be removed. STEP: Recreating the namespace STEP: Verifying there are no pods in the namespace [AfterEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 4 00:23:14.099: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "namespaces-9138" for this suite. STEP: Destroying namespace "nsdeletetest-3241" for this suite. 
Apr 4 00:23:14.137: INFO: Namespace nsdeletetest-3241 was already deleted STEP: Destroying namespace "nsdeletetest-6093" for this suite. • [SLOW TEST:31.294 seconds] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance]","total":275,"completed":182,"skipped":3156,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 4 00:23:14.149: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Apr 4 00:23:14.992: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Apr 4 00:23:17.002: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, 
AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721556595, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721556595, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721556595, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721556594, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6cc9cc9dc\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Apr 4 00:23:20.054: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should deny crd creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Registering the crd webhook via the AdmissionRegistration API STEP: Creating a custom resource definition that should be denied by the webhook Apr 4 00:23:20.074: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 4 00:23:20.089: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-9291" for this suite. STEP: Destroying namespace "webhook-9291-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:6.047 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should deny crd creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","total":275,"completed":183,"skipped":3203,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 4 00:23:20.197: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for intra-pod communication: http [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Performing setup for networking test in namespace pod-network-test-8456 STEP: creating a selector STEP: Creating the service pods in kubernetes Apr 4 00:23:20.269: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable Apr 4 00:23:20.309: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Apr 4 00:23:22.377: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready 
= true) Apr 4 00:23:24.313: INFO: The status of Pod netserver-0 is Running (Ready = false) Apr 4 00:23:26.313: INFO: The status of Pod netserver-0 is Running (Ready = false) Apr 4 00:23:28.323: INFO: The status of Pod netserver-0 is Running (Ready = false) Apr 4 00:23:30.348: INFO: The status of Pod netserver-0 is Running (Ready = false) Apr 4 00:23:32.313: INFO: The status of Pod netserver-0 is Running (Ready = false) Apr 4 00:23:34.314: INFO: The status of Pod netserver-0 is Running (Ready = false) Apr 4 00:23:36.313: INFO: The status of Pod netserver-0 is Running (Ready = false) Apr 4 00:23:38.314: INFO: The status of Pod netserver-0 is Running (Ready = true) Apr 4 00:23:38.320: INFO: The status of Pod netserver-1 is Running (Ready = false) Apr 4 00:23:40.324: INFO: The status of Pod netserver-1 is Running (Ready = true) STEP: Creating test pods Apr 4 00:23:44.349: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.28:8080/dial?request=hostname&protocol=http&host=10.244.2.27&port=8080&tries=1'] Namespace:pod-network-test-8456 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 4 00:23:44.349: INFO: >>> kubeConfig: /root/.kube/config I0404 00:23:44.383620 7 log.go:172] (0xc00256a630) (0xc002550820) Create stream I0404 00:23:44.383649 7 log.go:172] (0xc00256a630) (0xc002550820) Stream added, broadcasting: 1 I0404 00:23:44.385946 7 log.go:172] (0xc00256a630) Reply frame received for 1 I0404 00:23:44.386004 7 log.go:172] (0xc00256a630) (0xc000b86140) Create stream I0404 00:23:44.386022 7 log.go:172] (0xc00256a630) (0xc000b86140) Stream added, broadcasting: 3 I0404 00:23:44.387088 7 log.go:172] (0xc00256a630) Reply frame received for 3 I0404 00:23:44.387133 7 log.go:172] (0xc00256a630) (0xc0025508c0) Create stream I0404 00:23:44.387150 7 log.go:172] (0xc00256a630) (0xc0025508c0) Stream added, broadcasting: 5 I0404 00:23:44.387949 7 log.go:172] (0xc00256a630) Reply frame 
received for 5 I0404 00:23:44.477429 7 log.go:172] (0xc00256a630) Data frame received for 3 I0404 00:23:44.477454 7 log.go:172] (0xc000b86140) (3) Data frame handling I0404 00:23:44.477465 7 log.go:172] (0xc000b86140) (3) Data frame sent I0404 00:23:44.478007 7 log.go:172] (0xc00256a630) Data frame received for 3 I0404 00:23:44.478040 7 log.go:172] (0xc000b86140) (3) Data frame handling I0404 00:23:44.478162 7 log.go:172] (0xc00256a630) Data frame received for 5 I0404 00:23:44.478176 7 log.go:172] (0xc0025508c0) (5) Data frame handling I0404 00:23:44.479781 7 log.go:172] (0xc00256a630) Data frame received for 1 I0404 00:23:44.479796 7 log.go:172] (0xc002550820) (1) Data frame handling I0404 00:23:44.479805 7 log.go:172] (0xc002550820) (1) Data frame sent I0404 00:23:44.479891 7 log.go:172] (0xc00256a630) (0xc002550820) Stream removed, broadcasting: 1 I0404 00:23:44.479960 7 log.go:172] (0xc00256a630) (0xc002550820) Stream removed, broadcasting: 1 I0404 00:23:44.479971 7 log.go:172] (0xc00256a630) (0xc000b86140) Stream removed, broadcasting: 3 I0404 00:23:44.480098 7 log.go:172] (0xc00256a630) Go away received I0404 00:23:44.480134 7 log.go:172] (0xc00256a630) (0xc0025508c0) Stream removed, broadcasting: 5 Apr 4 00:23:44.480: INFO: Waiting for responses: map[] Apr 4 00:23:44.483: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.28:8080/dial?request=hostname&protocol=http&host=10.244.1.80&port=8080&tries=1'] Namespace:pod-network-test-8456 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 4 00:23:44.483: INFO: >>> kubeConfig: /root/.kube/config I0404 00:23:44.510349 7 log.go:172] (0xc002d20420) (0xc0018b4e60) Create stream I0404 00:23:44.510380 7 log.go:172] (0xc002d20420) (0xc0018b4e60) Stream added, broadcasting: 1 I0404 00:23:44.512145 7 log.go:172] (0xc002d20420) Reply frame received for 1 I0404 00:23:44.512197 7 log.go:172] (0xc002d20420) (0xc000fa8500) Create 
stream I0404 00:23:44.512214 7 log.go:172] (0xc002d20420) (0xc000fa8500) Stream added, broadcasting: 3 I0404 00:23:44.513306 7 log.go:172] (0xc002d20420) Reply frame received for 3 I0404 00:23:44.513346 7 log.go:172] (0xc002d20420) (0xc002810320) Create stream I0404 00:23:44.513362 7 log.go:172] (0xc002d20420) (0xc002810320) Stream added, broadcasting: 5 I0404 00:23:44.514284 7 log.go:172] (0xc002d20420) Reply frame received for 5 I0404 00:23:44.571599 7 log.go:172] (0xc002d20420) Data frame received for 3 I0404 00:23:44.571628 7 log.go:172] (0xc000fa8500) (3) Data frame handling I0404 00:23:44.571646 7 log.go:172] (0xc000fa8500) (3) Data frame sent I0404 00:23:44.571969 7 log.go:172] (0xc002d20420) Data frame received for 3 I0404 00:23:44.571989 7 log.go:172] (0xc000fa8500) (3) Data frame handling I0404 00:23:44.572137 7 log.go:172] (0xc002d20420) Data frame received for 5 I0404 00:23:44.572166 7 log.go:172] (0xc002810320) (5) Data frame handling I0404 00:23:44.574159 7 log.go:172] (0xc002d20420) Data frame received for 1 I0404 00:23:44.574179 7 log.go:172] (0xc0018b4e60) (1) Data frame handling I0404 00:23:44.574188 7 log.go:172] (0xc0018b4e60) (1) Data frame sent I0404 00:23:44.574198 7 log.go:172] (0xc002d20420) (0xc0018b4e60) Stream removed, broadcasting: 1 I0404 00:23:44.574207 7 log.go:172] (0xc002d20420) Go away received I0404 00:23:44.574350 7 log.go:172] (0xc002d20420) (0xc0018b4e60) Stream removed, broadcasting: 1 I0404 00:23:44.574381 7 log.go:172] (0xc002d20420) (0xc000fa8500) Stream removed, broadcasting: 3 I0404 00:23:44.574397 7 log.go:172] (0xc002d20420) (0xc002810320) Stream removed, broadcasting: 5 Apr 4 00:23:44.574: INFO: Waiting for responses: map[] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 4 00:23:44.574: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-8456" for this suite. 
• [SLOW TEST:24.386 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29 should function for intra-pod communication: http [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","total":275,"completed":184,"skipped":3219,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 4 00:23:44.584: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test downward API volume plugin Apr 4 00:23:44.647: INFO: Waiting up to 5m0s for pod "downwardapi-volume-b19eac43-229d-4935-8e09-3382789215af" in namespace "projected-704" to be "Succeeded or Failed" Apr 4 00:23:44.660: 
INFO: Pod "downwardapi-volume-b19eac43-229d-4935-8e09-3382789215af": Phase="Pending", Reason="", readiness=false. Elapsed: 13.267193ms Apr 4 00:23:46.664: INFO: Pod "downwardapi-volume-b19eac43-229d-4935-8e09-3382789215af": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017584102s Apr 4 00:23:48.677: INFO: Pod "downwardapi-volume-b19eac43-229d-4935-8e09-3382789215af": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.030090028s STEP: Saw pod success Apr 4 00:23:48.677: INFO: Pod "downwardapi-volume-b19eac43-229d-4935-8e09-3382789215af" satisfied condition "Succeeded or Failed" Apr 4 00:23:48.680: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-b19eac43-229d-4935-8e09-3382789215af container client-container: STEP: delete the pod Apr 4 00:23:48.726: INFO: Waiting for pod downwardapi-volume-b19eac43-229d-4935-8e09-3382789215af to disappear Apr 4 00:23:48.735: INFO: Pod downwardapi-volume-b19eac43-229d-4935-8e09-3382789215af no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 4 00:23:48.735: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-704" for this suite. 
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":275,"completed":185,"skipped":3256,"failed":0} SSS ------------------------------ [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 4 00:23:48.742: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to restart watching from the last resource version observed by the previous watch [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating a watch on configmaps STEP: creating a new configmap STEP: modifying the configmap once STEP: closing the watch once it receives two notifications Apr 4 00:23:48.810: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-8845 /api/v1/namespaces/watch-8845/configmaps/e2e-watch-test-watch-closed 2d2c76cd-b59e-44a3-94ca-a88345a1907d 5206619 0 2020-04-04 00:23:48 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} Apr 4 00:23:48.810: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-8845 /api/v1/namespaces/watch-8845/configmaps/e2e-watch-test-watch-closed 2d2c76cd-b59e-44a3-94ca-a88345a1907d 5206620 0 2020-04-04 00:23:48 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] []},Data:map[string]string{mutation: 
1,},BinaryData:map[string][]byte{},Immutable:nil,} STEP: modifying the configmap a second time, while the watch is closed STEP: creating a new watch on configmaps from the last resource version observed by the first watch STEP: deleting the configmap STEP: Expecting to observe notifications for all changes to the configmap since the first watch closed Apr 4 00:23:48.821: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-8845 /api/v1/namespaces/watch-8845/configmaps/e2e-watch-test-watch-closed 2d2c76cd-b59e-44a3-94ca-a88345a1907d 5206621 0 2020-04-04 00:23:48 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} Apr 4 00:23:48.821: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-8845 /api/v1/namespaces/watch-8845/configmaps/e2e-watch-test-watch-closed 2d2c76cd-b59e-44a3-94ca-a88345a1907d 5206622 0 2020-04-04 00:23:48 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 4 00:23:48.821: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-8845" for this suite. 
•{"msg":"PASSED [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance]","total":275,"completed":186,"skipped":3259,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 4 00:23:48.830: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow substituting values in a container's command [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test substitution in container's command Apr 4 00:23:48.888: INFO: Waiting up to 5m0s for pod "var-expansion-87439b27-200a-4e9c-bc9b-a256ca6b26f2" in namespace "var-expansion-1727" to be "Succeeded or Failed" Apr 4 00:23:48.928: INFO: Pod "var-expansion-87439b27-200a-4e9c-bc9b-a256ca6b26f2": Phase="Pending", Reason="", readiness=false. Elapsed: 40.400746ms Apr 4 00:23:50.932: INFO: Pod "var-expansion-87439b27-200a-4e9c-bc9b-a256ca6b26f2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.044112627s Apr 4 00:23:52.936: INFO: Pod "var-expansion-87439b27-200a-4e9c-bc9b-a256ca6b26f2": Phase="Running", Reason="", readiness=true. Elapsed: 4.047956978s Apr 4 00:23:54.940: INFO: Pod "var-expansion-87439b27-200a-4e9c-bc9b-a256ca6b26f2": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.051786972s STEP: Saw pod success Apr 4 00:23:54.940: INFO: Pod "var-expansion-87439b27-200a-4e9c-bc9b-a256ca6b26f2" satisfied condition "Succeeded or Failed" Apr 4 00:23:54.942: INFO: Trying to get logs from node latest-worker2 pod var-expansion-87439b27-200a-4e9c-bc9b-a256ca6b26f2 container dapi-container: STEP: delete the pod Apr 4 00:23:54.958: INFO: Waiting for pod var-expansion-87439b27-200a-4e9c-bc9b-a256ca6b26f2 to disappear Apr 4 00:23:54.968: INFO: Pod var-expansion-87439b27-200a-4e9c-bc9b-a256ca6b26f2 no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 4 00:23:54.969: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-1727" for this suite. • [SLOW TEST:6.145 seconds] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 should allow substituting values in a container's command [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance]","total":275,"completed":187,"skipped":3295,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 4 00:23:54.976: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default 
service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:698 [It] should be able to change the type from NodePort to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating a service nodeport-service with the type=NodePort in namespace services-9948 STEP: Creating active service to test reachability when its FQDN is referred as externalName for another service STEP: creating service externalsvc in namespace services-9948 STEP: creating replication controller externalsvc in namespace services-9948 I0404 00:23:55.161104 7 runners.go:190] Created replication controller with name: externalsvc, namespace: services-9948, replica count: 2 I0404 00:23:58.211773 7 runners.go:190] externalsvc Pods: 2 out of 2 created, 1 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0404 00:24:01.212016 7 runners.go:190] externalsvc Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady STEP: changing the NodePort service to type=ExternalName Apr 4 00:24:01.258: INFO: Creating new exec pod Apr 4 00:24:05.291: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=services-9948 execpodpsbwg -- /bin/sh -x -c nslookup nodeport-service' Apr 4 00:24:05.486: INFO: stderr: "I0404 00:24:05.402501 2803 log.go:172] (0xc000990000) (0xc0007de000) Create stream\nI0404 00:24:05.402564 2803 log.go:172] (0xc000990000) (0xc0007de000) Stream added, broadcasting: 1\nI0404 00:24:05.405761 2803 log.go:172] (0xc000990000) Reply frame received for 1\nI0404 00:24:05.405802 2803 log.go:172] (0xc000990000) (0xc0007de0a0) Create stream\nI0404 00:24:05.405817 2803 log.go:172] (0xc000990000) (0xc0007de0a0) Stream added, broadcasting: 3\nI0404 
00:24:05.406829 2803 log.go:172] (0xc000990000) Reply frame received for 3\nI0404 00:24:05.406870 2803 log.go:172] (0xc000990000) (0xc0007d2140) Create stream\nI0404 00:24:05.406885 2803 log.go:172] (0xc000990000) (0xc0007d2140) Stream added, broadcasting: 5\nI0404 00:24:05.407797 2803 log.go:172] (0xc000990000) Reply frame received for 5\nI0404 00:24:05.472770 2803 log.go:172] (0xc000990000) Data frame received for 5\nI0404 00:24:05.472809 2803 log.go:172] (0xc0007d2140) (5) Data frame handling\nI0404 00:24:05.472841 2803 log.go:172] (0xc0007d2140) (5) Data frame sent\n+ nslookup nodeport-service\nI0404 00:24:05.477902 2803 log.go:172] (0xc000990000) Data frame received for 3\nI0404 00:24:05.477939 2803 log.go:172] (0xc0007de0a0) (3) Data frame handling\nI0404 00:24:05.477989 2803 log.go:172] (0xc0007de0a0) (3) Data frame sent\nI0404 00:24:05.478983 2803 log.go:172] (0xc000990000) Data frame received for 3\nI0404 00:24:05.479007 2803 log.go:172] (0xc0007de0a0) (3) Data frame handling\nI0404 00:24:05.479027 2803 log.go:172] (0xc0007de0a0) (3) Data frame sent\nI0404 00:24:05.479306 2803 log.go:172] (0xc000990000) Data frame received for 3\nI0404 00:24:05.479324 2803 log.go:172] (0xc0007de0a0) (3) Data frame handling\nI0404 00:24:05.479498 2803 log.go:172] (0xc000990000) Data frame received for 5\nI0404 00:24:05.479513 2803 log.go:172] (0xc0007d2140) (5) Data frame handling\nI0404 00:24:05.480958 2803 log.go:172] (0xc000990000) Data frame received for 1\nI0404 00:24:05.480975 2803 log.go:172] (0xc0007de000) (1) Data frame handling\nI0404 00:24:05.480984 2803 log.go:172] (0xc0007de000) (1) Data frame sent\nI0404 00:24:05.481334 2803 log.go:172] (0xc000990000) (0xc0007de000) Stream removed, broadcasting: 1\nI0404 00:24:05.481749 2803 log.go:172] (0xc000990000) Go away received\nI0404 00:24:05.481824 2803 log.go:172] (0xc000990000) (0xc0007de000) Stream removed, broadcasting: 1\nI0404 00:24:05.481855 2803 log.go:172] (0xc000990000) (0xc0007de0a0) Stream removed, 
broadcasting: 3\nI0404 00:24:05.481875 2803 log.go:172] (0xc000990000) (0xc0007d2140) Stream removed, broadcasting: 5\n" Apr 4 00:24:05.486: INFO: stdout: "Server:\t\t10.96.0.10\nAddress:\t10.96.0.10#53\n\nnodeport-service.services-9948.svc.cluster.local\tcanonical name = externalsvc.services-9948.svc.cluster.local.\nName:\texternalsvc.services-9948.svc.cluster.local\nAddress: 10.96.181.6\n\n" STEP: deleting ReplicationController externalsvc in namespace services-9948, will wait for the garbage collector to delete the pods Apr 4 00:24:05.546: INFO: Deleting ReplicationController externalsvc took: 6.575076ms Apr 4 00:24:05.947: INFO: Terminating ReplicationController externalsvc pods took: 400.278864ms Apr 4 00:24:13.071: INFO: Cleaning up the NodePort to ExternalName test service [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 4 00:24:13.085: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-9948" for this suite. 
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:702 • [SLOW TEST:18.123 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from NodePort to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance]","total":275,"completed":188,"skipped":3360,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 4 00:24:13.100: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:84 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:99 STEP: Creating service test in namespace statefulset-4327 [It] Should recreate evicted statefulset [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Looking for a node to schedule stateful set and pod STEP: Creating pod with conflicting port in namespace statefulset-4327 STEP: Creating statefulset with conflicting 
port in namespace statefulset-4327 STEP: Waiting until pod test-pod will start running in namespace statefulset-4327 STEP: Waiting until stateful pod ss-0 will be recreated and deleted at least once in namespace statefulset-4327 Apr 4 00:24:17.232: INFO: Observed stateful pod in namespace: statefulset-4327, name: ss-0, uid: 49c393d2-bef1-45cc-8805-44279d090227, status phase: Pending. Waiting for statefulset controller to delete. Apr 4 00:24:22.967: INFO: Observed stateful pod in namespace: statefulset-4327, name: ss-0, uid: 49c393d2-bef1-45cc-8805-44279d090227, status phase: Failed. Waiting for statefulset controller to delete. Apr 4 00:24:22.976: INFO: Observed stateful pod in namespace: statefulset-4327, name: ss-0, uid: 49c393d2-bef1-45cc-8805-44279d090227, status phase: Failed. Waiting for statefulset controller to delete. Apr 4 00:24:23.000: INFO: Observed delete event for stateful pod ss-0 in namespace statefulset-4327 STEP: Removing pod with conflicting port in namespace statefulset-4327 STEP: Waiting when stateful pod ss-0 will be recreated in namespace statefulset-4327 and will be in running state [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:110 Apr 4 00:24:27.122: INFO: Deleting all statefulset in ns statefulset-4327 Apr 4 00:24:27.124: INFO: Scaling statefulset ss to 0 Apr 4 00:24:37.153: INFO: Waiting for statefulset status.replicas updated to 0 Apr 4 00:24:37.156: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 4 00:24:37.170: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-4327" for this suite. 
• [SLOW TEST:24.078 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 Should recreate evicted statefulset [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]","total":275,"completed":189,"skipped":3392,"failed":0} S ------------------------------ [k8s.io] Pods should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 4 00:24:37.178: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:178 [It] should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating the pod STEP: setting up watch STEP: submitting the pod to kubernetes Apr 4 00:24:37.265: INFO: observed the pod list STEP: verifying the pod is in kubernetes STEP: verifying pod creation was observed STEP: deleting the pod gracefully STEP: verifying the kubelet observed the termination notice STEP: verifying pod deletion was observed [AfterEach] [k8s.io] Pods 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 4 00:24:53.006: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-585" for this suite. • [SLOW TEST:15.834 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] Pods should be submitted and removed [NodeConformance] [Conformance]","total":275,"completed":190,"skipped":3393,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 4 00:24:53.012: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Apr 4 00:24:53.402: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Apr 4 00:24:55.414: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, 
ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721556693, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721556693, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721556693, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721556693, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6cc9cc9dc\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Apr 4 00:24:58.469: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] patching/updating a validating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a validating webhook configuration STEP: Creating a configMap that does not comply to the validation webhook rules STEP: Updating a validating webhook configuration's rules to not include the create operation STEP: Creating a configMap that does not comply to the validation webhook rules STEP: Patching a validating webhook configuration's rules to include the create operation STEP: Creating a configMap that does not comply to the validation webhook rules [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 4 00:24:58.609: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-6976" for this suite. 
STEP: Destroying namespace "webhook-6976-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:5.716 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 patching/updating a validating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance]","total":275,"completed":191,"skipped":3425,"failed":0} S ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 4 00:24:58.728: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test emptydir 0644 on node default medium Apr 4 00:24:58.793: INFO: Waiting up to 5m0s for pod "pod-75c67d21-d146-433f-b298-5fac1efa3abf" in namespace "emptydir-9293" to be "Succeeded or Failed" Apr 4 00:24:58.812: INFO: Pod "pod-75c67d21-d146-433f-b298-5fac1efa3abf": Phase="Pending", Reason="", readiness=false. 
Elapsed: 19.08623ms Apr 4 00:25:00.816: INFO: Pod "pod-75c67d21-d146-433f-b298-5fac1efa3abf": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023008743s Apr 4 00:25:02.819: INFO: Pod "pod-75c67d21-d146-433f-b298-5fac1efa3abf": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.02580202s STEP: Saw pod success Apr 4 00:25:02.819: INFO: Pod "pod-75c67d21-d146-433f-b298-5fac1efa3abf" satisfied condition "Succeeded or Failed" Apr 4 00:25:02.821: INFO: Trying to get logs from node latest-worker pod pod-75c67d21-d146-433f-b298-5fac1efa3abf container test-container: STEP: delete the pod Apr 4 00:25:02.938: INFO: Waiting for pod pod-75c67d21-d146-433f-b298-5fac1efa3abf to disappear Apr 4 00:25:02.946: INFO: Pod pod-75c67d21-d146-433f-b298-5fac1efa3abf no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 4 00:25:02.946: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-9293" for this suite. 
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":192,"skipped":3426,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 4 00:25:02.955: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test downward API volume plugin Apr 4 00:25:03.076: INFO: Waiting up to 5m0s for pod "downwardapi-volume-cdfa30c8-03f8-42c1-a058-9b2756c305e0" in namespace "projected-4994" to be "Succeeded or Failed" Apr 4 00:25:03.079: INFO: Pod "downwardapi-volume-cdfa30c8-03f8-42c1-a058-9b2756c305e0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.939709ms Apr 4 00:25:05.083: INFO: Pod "downwardapi-volume-cdfa30c8-03f8-42c1-a058-9b2756c305e0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006982344s Apr 4 00:25:07.088: INFO: Pod "downwardapi-volume-cdfa30c8-03f8-42c1-a058-9b2756c305e0": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.011410067s STEP: Saw pod success Apr 4 00:25:07.088: INFO: Pod "downwardapi-volume-cdfa30c8-03f8-42c1-a058-9b2756c305e0" satisfied condition "Succeeded or Failed" Apr 4 00:25:07.091: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-cdfa30c8-03f8-42c1-a058-9b2756c305e0 container client-container: STEP: delete the pod Apr 4 00:25:07.124: INFO: Waiting for pod downwardapi-volume-cdfa30c8-03f8-42c1-a058-9b2756c305e0 to disappear Apr 4 00:25:07.150: INFO: Pod downwardapi-volume-cdfa30c8-03f8-42c1-a058-9b2756c305e0 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 4 00:25:07.150: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-4994" for this suite. •{"msg":"PASSED [sig-storage] Projected downwardAPI should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":193,"skipped":3483,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 4 00:25:07.159: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Performing setup for networking test in namespace pod-network-test-5882 STEP: creating a selector 
STEP: Creating the service pods in kubernetes Apr 4 00:25:07.197: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable Apr 4 00:25:07.295: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Apr 4 00:25:09.299: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Apr 4 00:25:11.301: INFO: The status of Pod netserver-0 is Running (Ready = false) Apr 4 00:25:13.300: INFO: The status of Pod netserver-0 is Running (Ready = false) Apr 4 00:25:15.300: INFO: The status of Pod netserver-0 is Running (Ready = false) Apr 4 00:25:17.299: INFO: The status of Pod netserver-0 is Running (Ready = false) Apr 4 00:25:19.299: INFO: The status of Pod netserver-0 is Running (Ready = false) Apr 4 00:25:21.299: INFO: The status of Pod netserver-0 is Running (Ready = false) Apr 4 00:25:23.299: INFO: The status of Pod netserver-0 is Running (Ready = false) Apr 4 00:25:25.299: INFO: The status of Pod netserver-0 is Running (Ready = true) Apr 4 00:25:25.305: INFO: The status of Pod netserver-1 is Running (Ready = true) STEP: Creating test pods Apr 4 00:25:29.359: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.244.2.33 8081 | grep -v '^\s*$'] Namespace:pod-network-test-5882 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 4 00:25:29.359: INFO: >>> kubeConfig: /root/.kube/config I0404 00:25:29.398330 7 log.go:172] (0xc002d20370) (0xc0013d0320) Create stream I0404 00:25:29.398378 7 log.go:172] (0xc002d20370) (0xc0013d0320) Stream added, broadcasting: 1 I0404 00:25:29.400240 7 log.go:172] (0xc002d20370) Reply frame received for 1 I0404 00:25:29.400277 7 log.go:172] (0xc002d20370) (0xc00237a3c0) Create stream I0404 00:25:29.400298 7 log.go:172] (0xc002d20370) (0xc00237a3c0) Stream added, broadcasting: 3 I0404 00:25:29.401496 7 log.go:172] (0xc002d20370) Reply frame received for 3 I0404 
00:25:29.401525 7 log.go:172] (0xc002d20370) (0xc0013d0460) Create stream I0404 00:25:29.401535 7 log.go:172] (0xc002d20370) (0xc0013d0460) Stream added, broadcasting: 5 I0404 00:25:29.402415 7 log.go:172] (0xc002d20370) Reply frame received for 5 I0404 00:25:30.494570 7 log.go:172] (0xc002d20370) Data frame received for 5 I0404 00:25:30.494622 7 log.go:172] (0xc0013d0460) (5) Data frame handling I0404 00:25:30.494648 7 log.go:172] (0xc002d20370) Data frame received for 3 I0404 00:25:30.494661 7 log.go:172] (0xc00237a3c0) (3) Data frame handling I0404 00:25:30.494677 7 log.go:172] (0xc00237a3c0) (3) Data frame sent I0404 00:25:30.494698 7 log.go:172] (0xc002d20370) Data frame received for 3 I0404 00:25:30.494711 7 log.go:172] (0xc00237a3c0) (3) Data frame handling I0404 00:25:30.496631 7 log.go:172] (0xc002d20370) Data frame received for 1 I0404 00:25:30.496662 7 log.go:172] (0xc0013d0320) (1) Data frame handling I0404 00:25:30.496677 7 log.go:172] (0xc0013d0320) (1) Data frame sent I0404 00:25:30.496704 7 log.go:172] (0xc002d20370) (0xc0013d0320) Stream removed, broadcasting: 1 I0404 00:25:30.496737 7 log.go:172] (0xc002d20370) Go away received I0404 00:25:30.496864 7 log.go:172] (0xc002d20370) (0xc0013d0320) Stream removed, broadcasting: 1 I0404 00:25:30.496900 7 log.go:172] (0xc002d20370) (0xc00237a3c0) Stream removed, broadcasting: 3 I0404 00:25:30.496918 7 log.go:172] (0xc002d20370) (0xc0013d0460) Stream removed, broadcasting: 5 Apr 4 00:25:30.496: INFO: Found all expected endpoints: [netserver-0] Apr 4 00:25:30.511: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.244.1.88 8081 | grep -v '^\s*$'] Namespace:pod-network-test-5882 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 4 00:25:30.511: INFO: >>> kubeConfig: /root/.kube/config I0404 00:25:30.549496 7 log.go:172] (0xc002af6630) (0xc001e885a0) Create stream I0404 00:25:30.549519 7 log.go:172] 
(0xc002af6630) (0xc001e885a0) Stream added, broadcasting: 1 I0404 00:25:30.551501 7 log.go:172] (0xc002af6630) Reply frame received for 1 I0404 00:25:30.551528 7 log.go:172] (0xc002af6630) (0xc001e54140) Create stream I0404 00:25:30.551538 7 log.go:172] (0xc002af6630) (0xc001e54140) Stream added, broadcasting: 3 I0404 00:25:30.552565 7 log.go:172] (0xc002af6630) Reply frame received for 3 I0404 00:25:30.552588 7 log.go:172] (0xc002af6630) (0xc001e88780) Create stream I0404 00:25:30.552599 7 log.go:172] (0xc002af6630) (0xc001e88780) Stream added, broadcasting: 5 I0404 00:25:30.554223 7 log.go:172] (0xc002af6630) Reply frame received for 5 I0404 00:25:31.630237 7 log.go:172] (0xc002af6630) Data frame received for 3 I0404 00:25:31.630282 7 log.go:172] (0xc001e54140) (3) Data frame handling I0404 00:25:31.630314 7 log.go:172] (0xc001e54140) (3) Data frame sent I0404 00:25:31.630346 7 log.go:172] (0xc002af6630) Data frame received for 3 I0404 00:25:31.630364 7 log.go:172] (0xc001e54140) (3) Data frame handling I0404 00:25:31.630595 7 log.go:172] (0xc002af6630) Data frame received for 5 I0404 00:25:31.630618 7 log.go:172] (0xc001e88780) (5) Data frame handling I0404 00:25:31.632423 7 log.go:172] (0xc002af6630) Data frame received for 1 I0404 00:25:31.632453 7 log.go:172] (0xc001e885a0) (1) Data frame handling I0404 00:25:31.632473 7 log.go:172] (0xc001e885a0) (1) Data frame sent I0404 00:25:31.632583 7 log.go:172] (0xc002af6630) (0xc001e885a0) Stream removed, broadcasting: 1 I0404 00:25:31.632732 7 log.go:172] (0xc002af6630) (0xc001e885a0) Stream removed, broadcasting: 1 I0404 00:25:31.632822 7 log.go:172] (0xc002af6630) (0xc001e54140) Stream removed, broadcasting: 3 I0404 00:25:31.632856 7 log.go:172] (0xc002af6630) (0xc001e88780) Stream removed, broadcasting: 5 Apr 4 00:25:31.632: INFO: Found all expected endpoints: [netserver-1] I0404 00:25:31.632921 7 log.go:172] (0xc002af6630) Go away received [AfterEach] [sig-network] Networking 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 4 00:25:31.632: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-5882" for this suite. • [SLOW TEST:24.482 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29 should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":194,"skipped":3509,"failed":0} SSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 4 00:25:31.642: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54 [It] should have monotonically increasing restart count [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating pod liveness-624bdd58-e5f0-49d6-becf-7744ddce10f8 in namespace container-probe-3536 Apr 4 
00:25:35.735: INFO: Started pod liveness-624bdd58-e5f0-49d6-becf-7744ddce10f8 in namespace container-probe-3536 STEP: checking the pod's current state and verifying that restartCount is present Apr 4 00:25:35.738: INFO: Initial restart count of pod liveness-624bdd58-e5f0-49d6-becf-7744ddce10f8 is 0 Apr 4 00:25:53.821: INFO: Restart count of pod container-probe-3536/liveness-624bdd58-e5f0-49d6-becf-7744ddce10f8 is now 1 (18.0826139s elapsed) Apr 4 00:26:13.876: INFO: Restart count of pod container-probe-3536/liveness-624bdd58-e5f0-49d6-becf-7744ddce10f8 is now 2 (38.137945998s elapsed) Apr 4 00:26:35.926: INFO: Restart count of pod container-probe-3536/liveness-624bdd58-e5f0-49d6-becf-7744ddce10f8 is now 3 (1m0.187353743s elapsed) Apr 4 00:26:53.966: INFO: Restart count of pod container-probe-3536/liveness-624bdd58-e5f0-49d6-becf-7744ddce10f8 is now 4 (1m18.228148358s elapsed) Apr 4 00:28:08.114: INFO: Restart count of pod container-probe-3536/liveness-624bdd58-e5f0-49d6-becf-7744ddce10f8 is now 5 (2m32.375205613s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 4 00:28:08.128: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-3536" for this suite. 
• [SLOW TEST:156.493 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 should have monotonically increasing restart count [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","total":275,"completed":195,"skipped":3523,"failed":0} SSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 4 00:28:08.136: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test downward API volume plugin Apr 4 00:28:08.217: INFO: Waiting up to 5m0s for pod "downwardapi-volume-d10ab433-f8b2-4e3c-864b-7dc016c22d3a" in namespace "downward-api-741" to be "Succeeded or Failed" Apr 4 00:28:08.220: INFO: Pod "downwardapi-volume-d10ab433-f8b2-4e3c-864b-7dc016c22d3a": Phase="Pending", Reason="", readiness=false. 
Elapsed: 3.222879ms Apr 4 00:28:10.224: INFO: Pod "downwardapi-volume-d10ab433-f8b2-4e3c-864b-7dc016c22d3a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006834173s Apr 4 00:28:12.228: INFO: Pod "downwardapi-volume-d10ab433-f8b2-4e3c-864b-7dc016c22d3a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010873862s STEP: Saw pod success Apr 4 00:28:12.228: INFO: Pod "downwardapi-volume-d10ab433-f8b2-4e3c-864b-7dc016c22d3a" satisfied condition "Succeeded or Failed" Apr 4 00:28:12.231: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-d10ab433-f8b2-4e3c-864b-7dc016c22d3a container client-container: STEP: delete the pod Apr 4 00:28:12.311: INFO: Waiting for pod downwardapi-volume-d10ab433-f8b2-4e3c-864b-7dc016c22d3a to disappear Apr 4 00:28:12.322: INFO: Pod downwardapi-volume-d10ab433-f8b2-4e3c-864b-7dc016c22d3a no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 4 00:28:12.322: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-741" for this suite. 
•{"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":275,"completed":196,"skipped":3534,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Job should adopt matching orphans and release non-matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 4 00:28:12.328: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename job STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching orphans and release non-matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a job STEP: Ensuring active pods == parallelism STEP: Orphaning one of the Job's Pods Apr 4 00:28:18.909: INFO: Successfully updated pod "adopt-release-bncv2" STEP: Checking that the Job readopts the Pod Apr 4 00:28:18.909: INFO: Waiting up to 15m0s for pod "adopt-release-bncv2" in namespace "job-2984" to be "adopted" Apr 4 00:28:18.929: INFO: Pod "adopt-release-bncv2": Phase="Running", Reason="", readiness=true. Elapsed: 19.962606ms Apr 4 00:28:20.933: INFO: Pod "adopt-release-bncv2": Phase="Running", Reason="", readiness=true. 
Elapsed: 2.024168943s Apr 4 00:28:20.933: INFO: Pod "adopt-release-bncv2" satisfied condition "adopted" STEP: Removing the labels from the Job's Pod Apr 4 00:28:21.441: INFO: Successfully updated pod "adopt-release-bncv2" STEP: Checking that the Job releases the Pod Apr 4 00:28:21.441: INFO: Waiting up to 15m0s for pod "adopt-release-bncv2" in namespace "job-2984" to be "released" Apr 4 00:28:21.455: INFO: Pod "adopt-release-bncv2": Phase="Running", Reason="", readiness=true. Elapsed: 13.054702ms Apr 4 00:28:23.458: INFO: Pod "adopt-release-bncv2": Phase="Running", Reason="", readiness=true. Elapsed: 2.016895523s Apr 4 00:28:23.458: INFO: Pod "adopt-release-bncv2" satisfied condition "released" [AfterEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 4 00:28:23.458: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "job-2984" for this suite. • [SLOW TEST:11.139 seconds] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should adopt matching orphans and release non-matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-apps] Job should adopt matching orphans and release non-matching pods [Conformance]","total":275,"completed":197,"skipped":3559,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate configmap [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 4 00:28:23.468: INFO: >>> kubeConfig: /root/.kube/config 
STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Apr 4 00:28:24.096: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Apr 4 00:28:26.106: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721556904, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721556904, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721556904, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721556904, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6cc9cc9dc\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Apr 4 00:28:29.166: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate configmap [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Registering the mutating configmap webhook via the AdmissionRegistration API STEP: create a configmap 
that should be updated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 4 00:28:29.209: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-8295" for this suite. STEP: Destroying namespace "webhook-8295-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:5.896 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate configmap [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate configmap [Conformance]","total":275,"completed":198,"skipped":3595,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 4 00:28:29.365: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Performing 
setup for networking test in namespace pod-network-test-8790 STEP: creating a selector STEP: Creating the service pods in kubernetes Apr 4 00:28:29.459: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable Apr 4 00:28:29.740: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Apr 4 00:28:31.744: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Apr 4 00:28:33.744: INFO: The status of Pod netserver-0 is Running (Ready = false) Apr 4 00:28:35.743: INFO: The status of Pod netserver-0 is Running (Ready = false) Apr 4 00:28:37.744: INFO: The status of Pod netserver-0 is Running (Ready = false) Apr 4 00:28:39.744: INFO: The status of Pod netserver-0 is Running (Ready = false) Apr 4 00:28:41.744: INFO: The status of Pod netserver-0 is Running (Ready = false) Apr 4 00:28:43.744: INFO: The status of Pod netserver-0 is Running (Ready = false) Apr 4 00:28:45.743: INFO: The status of Pod netserver-0 is Running (Ready = true) Apr 4 00:28:45.749: INFO: The status of Pod netserver-1 is Running (Ready = true) STEP: Creating test pods Apr 4 00:28:49.800: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.2.39:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-8790 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 4 00:28:49.800: INFO: >>> kubeConfig: /root/.kube/config I0404 00:28:49.847836 7 log.go:172] (0xc002af6bb0) (0xc0025500a0) Create stream I0404 00:28:49.847869 7 log.go:172] (0xc002af6bb0) (0xc0025500a0) Stream added, broadcasting: 1 I0404 00:28:49.850097 7 log.go:172] (0xc002af6bb0) Reply frame received for 1 I0404 00:28:49.850138 7 log.go:172] (0xc002af6bb0) (0xc000fe5400) Create stream I0404 00:28:49.850145 7 log.go:172] (0xc002af6bb0) (0xc000fe5400) Stream added, broadcasting: 3 I0404 00:28:49.851459 7 log.go:172] 
(0xc002af6bb0) Reply frame received for 3 I0404 00:28:49.851519 7 log.go:172] (0xc002af6bb0) (0xc002550140) Create stream I0404 00:28:49.851537 7 log.go:172] (0xc002af6bb0) (0xc002550140) Stream added, broadcasting: 5 I0404 00:28:49.852307 7 log.go:172] (0xc002af6bb0) Reply frame received for 5 I0404 00:28:49.944920 7 log.go:172] (0xc002af6bb0) Data frame received for 3 I0404 00:28:49.944953 7 log.go:172] (0xc000fe5400) (3) Data frame handling I0404 00:28:49.944980 7 log.go:172] (0xc000fe5400) (3) Data frame sent I0404 00:28:49.944992 7 log.go:172] (0xc002af6bb0) Data frame received for 3 I0404 00:28:49.945006 7 log.go:172] (0xc000fe5400) (3) Data frame handling I0404 00:28:49.945330 7 log.go:172] (0xc002af6bb0) Data frame received for 5 I0404 00:28:49.945394 7 log.go:172] (0xc002550140) (5) Data frame handling I0404 00:28:49.947014 7 log.go:172] (0xc002af6bb0) Data frame received for 1 I0404 00:28:49.947034 7 log.go:172] (0xc0025500a0) (1) Data frame handling I0404 00:28:49.947048 7 log.go:172] (0xc0025500a0) (1) Data frame sent I0404 00:28:49.947066 7 log.go:172] (0xc002af6bb0) (0xc0025500a0) Stream removed, broadcasting: 1 I0404 00:28:49.947085 7 log.go:172] (0xc002af6bb0) Go away received I0404 00:28:49.947151 7 log.go:172] (0xc002af6bb0) (0xc0025500a0) Stream removed, broadcasting: 1 I0404 00:28:49.947164 7 log.go:172] (0xc002af6bb0) (0xc000fe5400) Stream removed, broadcasting: 3 I0404 00:28:49.947170 7 log.go:172] (0xc002af6bb0) (0xc002550140) Stream removed, broadcasting: 5 Apr 4 00:28:49.947: INFO: Found all expected endpoints: [netserver-0] Apr 4 00:28:49.950: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.1.91:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-8790 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 4 00:28:49.950: INFO: >>> kubeConfig: /root/.kube/config I0404 00:28:49.978614 7 log.go:172] 
(0xc004d4a370) (0xc002810aa0) Create stream I0404 00:28:49.978635 7 log.go:172] (0xc004d4a370) (0xc002810aa0) Stream added, broadcasting: 1 I0404 00:28:49.980759 7 log.go:172] (0xc004d4a370) Reply frame received for 1 I0404 00:28:49.980816 7 log.go:172] (0xc004d4a370) (0xc000fe5ae0) Create stream I0404 00:28:49.980826 7 log.go:172] (0xc004d4a370) (0xc000fe5ae0) Stream added, broadcasting: 3 I0404 00:28:49.981786 7 log.go:172] (0xc004d4a370) Reply frame received for 3 I0404 00:28:49.981813 7 log.go:172] (0xc004d4a370) (0xc0013d1ae0) Create stream I0404 00:28:49.981823 7 log.go:172] (0xc004d4a370) (0xc0013d1ae0) Stream added, broadcasting: 5 I0404 00:28:49.982732 7 log.go:172] (0xc004d4a370) Reply frame received for 5 I0404 00:28:50.048061 7 log.go:172] (0xc004d4a370) Data frame received for 3 I0404 00:28:50.048086 7 log.go:172] (0xc000fe5ae0) (3) Data frame handling I0404 00:28:50.048099 7 log.go:172] (0xc000fe5ae0) (3) Data frame sent I0404 00:28:50.048107 7 log.go:172] (0xc004d4a370) Data frame received for 3 I0404 00:28:50.048120 7 log.go:172] (0xc000fe5ae0) (3) Data frame handling I0404 00:28:50.048198 7 log.go:172] (0xc004d4a370) Data frame received for 5 I0404 00:28:50.048215 7 log.go:172] (0xc0013d1ae0) (5) Data frame handling I0404 00:28:50.049681 7 log.go:172] (0xc004d4a370) Data frame received for 1 I0404 00:28:50.049698 7 log.go:172] (0xc002810aa0) (1) Data frame handling I0404 00:28:50.049714 7 log.go:172] (0xc002810aa0) (1) Data frame sent I0404 00:28:50.049736 7 log.go:172] (0xc004d4a370) (0xc002810aa0) Stream removed, broadcasting: 1 I0404 00:28:50.049754 7 log.go:172] (0xc004d4a370) Go away received I0404 00:28:50.049880 7 log.go:172] (0xc004d4a370) (0xc002810aa0) Stream removed, broadcasting: 1 I0404 00:28:50.049898 7 log.go:172] (0xc004d4a370) (0xc000fe5ae0) Stream removed, broadcasting: 3 I0404 00:28:50.049912 7 log.go:172] (0xc004d4a370) (0xc0013d1ae0) Stream removed, broadcasting: 5 Apr 4 00:28:50.049: INFO: Found all expected endpoints: 
[netserver-1] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 4 00:28:50.049: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-8790" for this suite. • [SLOW TEST:20.691 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29 should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":199,"skipped":3619,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 4 00:28:50.057: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename aggregator STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:76 Apr 4 00:28:50.115: INFO: >>> kubeConfig: /root/.kube/config [It] Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Registering the sample API server. Apr 4 00:28:51.151: INFO: deployment "sample-apiserver-deployment" doesn't have the required revision set Apr 4 00:28:53.244: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721556931, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721556931, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721556931, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721556931, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-76974b4fff\" is progressing."}}, CollisionCount:(*int32)(nil)} Apr 4 00:28:56.008: INFO: Waited 614.166287ms for the sample-apiserver to be ready to handle requests. [AfterEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:67 [AfterEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 4 00:28:56.800: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "aggregator-8068" for this suite. 
• [SLOW TEST:6.777 seconds] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","total":275,"completed":200,"skipped":3632,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Proxy version v1 should proxy logs on node using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 4 00:28:56.835: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy logs on node using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Apr 4 00:28:56.979: INFO: (0) /api/v1/nodes/latest-worker/proxy/logs/:
containers/
pods/
(200; 5.481226ms) Apr 4 00:28:56.982: INFO: (1) /api/v1/nodes/latest-worker/proxy/logs/:
containers/
pods/
(200; 2.897097ms) Apr 4 00:28:56.985: INFO: (2) /api/v1/nodes/latest-worker/proxy/logs/:
containers/
pods/
(200; 3.028564ms) Apr 4 00:28:56.988: INFO: (3) /api/v1/nodes/latest-worker/proxy/logs/:
containers/
pods/
(200; 2.859256ms) Apr 4 00:28:56.991: INFO: (4) /api/v1/nodes/latest-worker/proxy/logs/:
containers/
pods/
(200; 3.20518ms) Apr 4 00:28:56.994: INFO: (5) /api/v1/nodes/latest-worker/proxy/logs/:
containers/
pods/
(200; 3.233802ms) Apr 4 00:28:56.998: INFO: (6) /api/v1/nodes/latest-worker/proxy/logs/:
containers/
pods/
(200; 3.287727ms) Apr 4 00:28:57.001: INFO: (7) /api/v1/nodes/latest-worker/proxy/logs/:
containers/
pods/
(200; 3.020806ms) Apr 4 00:28:57.004: INFO: (8) /api/v1/nodes/latest-worker/proxy/logs/:
containers/
pods/
(200; 3.290566ms) Apr 4 00:28:57.007: INFO: (9) /api/v1/nodes/latest-worker/proxy/logs/:
containers/
pods/
(200; 3.115227ms) Apr 4 00:28:57.010: INFO: (10) /api/v1/nodes/latest-worker/proxy/logs/:
containers/
pods/
(200; 2.930157ms) Apr 4 00:28:57.013: INFO: (11) /api/v1/nodes/latest-worker/proxy/logs/:
containers/
pods/
(200; 3.074172ms) Apr 4 00:28:57.016: INFO: (12) /api/v1/nodes/latest-worker/proxy/logs/:
containers/
pods/
(200; 3.024887ms) Apr 4 00:28:57.020: INFO: (13) /api/v1/nodes/latest-worker/proxy/logs/:
containers/
pods/
(200; 3.257496ms) Apr 4 00:28:57.023: INFO: (14) /api/v1/nodes/latest-worker/proxy/logs/:
containers/
pods/
(200; 2.7832ms) Apr 4 00:28:57.025: INFO: (15) /api/v1/nodes/latest-worker/proxy/logs/:
containers/
pods/
(200; 2.628646ms) Apr 4 00:28:57.028: INFO: (16) /api/v1/nodes/latest-worker/proxy/logs/:
containers/
pods/
(200; 2.856774ms) Apr 4 00:28:57.031: INFO: (17) /api/v1/nodes/latest-worker/proxy/logs/:
containers/
pods/
(200; 3.247837ms) Apr 4 00:28:57.035: INFO: (18) /api/v1/nodes/latest-worker/proxy/logs/:
containers/
pods/
(200; 3.429613ms) Apr 4 00:28:57.038: INFO: (19) /api/v1/nodes/latest-worker/proxy/logs/:
containers/
pods/
(200; 3.302721ms) [AfterEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 4 00:28:57.038: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "proxy-1782" for this suite. •{"msg":"PASSED [sig-network] Proxy version v1 should proxy logs on node using proxy subresource [Conformance]","total":275,"completed":201,"skipped":3691,"failed":0} SSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 4 00:28:57.045: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153 [It] should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating the pod Apr 4 00:28:57.188: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 4 00:29:05.739: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-3713" for this suite. 
• [SLOW TEST:8.718 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance]","total":275,"completed":202,"skipped":3694,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 4 00:29:05.764: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating configMap with name projected-configmap-test-volume-6e142cb4-2814-41de-b67b-56f2ebce8e4d STEP: Creating a pod to test consume configMaps Apr 4 00:29:05.869: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-d4b3be10-0deb-4a24-9747-9f22d342fbe8" in namespace "projected-5729" to be "Succeeded or Failed" Apr 4 00:29:05.894: INFO: Pod "pod-projected-configmaps-d4b3be10-0deb-4a24-9747-9f22d342fbe8": Phase="Pending", Reason="", readiness=false. 
Elapsed: 24.480631ms Apr 4 00:29:07.898: INFO: Pod "pod-projected-configmaps-d4b3be10-0deb-4a24-9747-9f22d342fbe8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.028651715s Apr 4 00:29:09.902: INFO: Pod "pod-projected-configmaps-d4b3be10-0deb-4a24-9747-9f22d342fbe8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.032522368s STEP: Saw pod success Apr 4 00:29:09.902: INFO: Pod "pod-projected-configmaps-d4b3be10-0deb-4a24-9747-9f22d342fbe8" satisfied condition "Succeeded or Failed" Apr 4 00:29:09.905: INFO: Trying to get logs from node latest-worker2 pod pod-projected-configmaps-d4b3be10-0deb-4a24-9747-9f22d342fbe8 container projected-configmap-volume-test: STEP: delete the pod Apr 4 00:29:10.096: INFO: Waiting for pod pod-projected-configmaps-d4b3be10-0deb-4a24-9747-9f22d342fbe8 to disappear Apr 4 00:29:10.130: INFO: Pod pod-projected-configmaps-d4b3be10-0deb-4a24-9747-9f22d342fbe8 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 4 00:29:10.130: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5729" for this suite. •{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]","total":275,"completed":203,"skipped":3786,"failed":0} SSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replica set. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 4 00:29:10.141: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a replica set. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a ReplicaSet STEP: Ensuring resource quota status captures replicaset creation STEP: Deleting a ReplicaSet STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 4 00:29:21.229: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-3982" for this suite. • [SLOW TEST:11.096 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a replica set. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replica set. 
[Conformance]","total":275,"completed":204,"skipped":3792,"failed":0} SSSS ------------------------------ [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 4 00:29:21.237: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: create the deployment STEP: Wait for the Deployment to create new ReplicaSet STEP: delete the deployment STEP: wait for deployment deletion to see if the garbage collector mistakenly deletes the rs STEP: Gathering metrics W0404 00:29:22.449596 7 metrics_grabber.go:84] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. 
Apr 4 00:29:22.449: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 4 00:29:22.449: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-8904" for this suite. 
•{"msg":"PASSED [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]","total":275,"completed":205,"skipped":3796,"failed":0} SS ------------------------------ [k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 4 00:29:22.457: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename e2e-kubelet-etc-hosts STEP: Waiting for a default service account to be provisioned in namespace [It] should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Setting up the test STEP: Creating hostNetwork=false pod STEP: Creating hostNetwork=true pod STEP: Running the test STEP: Verifying /etc/hosts of container is kubelet-managed for pod with hostNetwork=false Apr 4 00:29:32.611: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-2422 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 4 00:29:32.611: INFO: >>> kubeConfig: /root/.kube/config I0404 00:29:32.650211 7 log.go:172] (0xc00256abb0) (0xc0018b4f00) Create stream I0404 00:29:32.650245 7 log.go:172] (0xc00256abb0) (0xc0018b4f00) Stream added, broadcasting: 1 I0404 00:29:32.658188 7 log.go:172] (0xc00256abb0) Reply frame received for 1 I0404 00:29:32.658255 7 log.go:172] (0xc00256abb0) (0xc0018b5220) Create stream I0404 00:29:32.658281 7 log.go:172] (0xc00256abb0) (0xc0018b5220) Stream added, broadcasting: 3 I0404 00:29:32.659621 7 log.go:172] 
(0xc00256abb0) Reply frame received for 3 I0404 00:29:32.659645 7 log.go:172] (0xc00256abb0) (0xc002810aa0) Create stream I0404 00:29:32.659654 7 log.go:172] (0xc00256abb0) (0xc002810aa0) Stream added, broadcasting: 5 I0404 00:29:32.660537 7 log.go:172] (0xc00256abb0) Reply frame received for 5 I0404 00:29:32.738887 7 log.go:172] (0xc00256abb0) Data frame received for 5 I0404 00:29:32.738927 7 log.go:172] (0xc002810aa0) (5) Data frame handling I0404 00:29:32.738958 7 log.go:172] (0xc00256abb0) Data frame received for 3 I0404 00:29:32.738972 7 log.go:172] (0xc0018b5220) (3) Data frame handling I0404 00:29:32.738990 7 log.go:172] (0xc0018b5220) (3) Data frame sent I0404 00:29:32.739006 7 log.go:172] (0xc00256abb0) Data frame received for 3 I0404 00:29:32.739024 7 log.go:172] (0xc0018b5220) (3) Data frame handling I0404 00:29:32.740676 7 log.go:172] (0xc00256abb0) Data frame received for 1 I0404 00:29:32.740700 7 log.go:172] (0xc0018b4f00) (1) Data frame handling I0404 00:29:32.740713 7 log.go:172] (0xc0018b4f00) (1) Data frame sent I0404 00:29:32.740731 7 log.go:172] (0xc00256abb0) (0xc0018b4f00) Stream removed, broadcasting: 1 I0404 00:29:32.740817 7 log.go:172] (0xc00256abb0) Go away received I0404 00:29:32.740850 7 log.go:172] (0xc00256abb0) (0xc0018b4f00) Stream removed, broadcasting: 1 I0404 00:29:32.740873 7 log.go:172] (0xc00256abb0) (0xc0018b5220) Stream removed, broadcasting: 3 I0404 00:29:32.740885 7 log.go:172] (0xc00256abb0) (0xc002810aa0) Stream removed, broadcasting: 5 Apr 4 00:29:32.740: INFO: Exec stderr: "" Apr 4 00:29:32.740: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-2422 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 4 00:29:32.740: INFO: >>> kubeConfig: /root/.kube/config I0404 00:29:32.772069 7 log.go:172] (0xc00277f4a0) (0xc0025503c0) Create stream I0404 00:29:32.772115 7 log.go:172] (0xc00277f4a0) (0xc0025503c0) Stream added, 
broadcasting: 1 I0404 00:29:32.774673 7 log.go:172] (0xc00277f4a0) Reply frame received for 1 I0404 00:29:32.774713 7 log.go:172] (0xc00277f4a0) (0xc002550460) Create stream I0404 00:29:32.774727 7 log.go:172] (0xc00277f4a0) (0xc002550460) Stream added, broadcasting: 3 I0404 00:29:32.775901 7 log.go:172] (0xc00277f4a0) Reply frame received for 3 I0404 00:29:32.775943 7 log.go:172] (0xc00277f4a0) (0xc001e88140) Create stream I0404 00:29:32.775967 7 log.go:172] (0xc00277f4a0) (0xc001e88140) Stream added, broadcasting: 5 I0404 00:29:32.777022 7 log.go:172] (0xc00277f4a0) Reply frame received for 5 I0404 00:29:32.853204 7 log.go:172] (0xc00277f4a0) Data frame received for 3 I0404 00:29:32.853279 7 log.go:172] (0xc002550460) (3) Data frame handling I0404 00:29:32.853294 7 log.go:172] (0xc002550460) (3) Data frame sent I0404 00:29:32.853305 7 log.go:172] (0xc00277f4a0) Data frame received for 3 I0404 00:29:32.853312 7 log.go:172] (0xc002550460) (3) Data frame handling I0404 00:29:32.853341 7 log.go:172] (0xc00277f4a0) Data frame received for 5 I0404 00:29:32.853354 7 log.go:172] (0xc001e88140) (5) Data frame handling I0404 00:29:32.854387 7 log.go:172] (0xc00277f4a0) Data frame received for 1 I0404 00:29:32.854411 7 log.go:172] (0xc0025503c0) (1) Data frame handling I0404 00:29:32.854424 7 log.go:172] (0xc0025503c0) (1) Data frame sent I0404 00:29:32.854444 7 log.go:172] (0xc00277f4a0) (0xc0025503c0) Stream removed, broadcasting: 1 I0404 00:29:32.854466 7 log.go:172] (0xc00277f4a0) Go away received I0404 00:29:32.854586 7 log.go:172] (0xc00277f4a0) (0xc0025503c0) Stream removed, broadcasting: 1 I0404 00:29:32.854613 7 log.go:172] (0xc00277f4a0) (0xc002550460) Stream removed, broadcasting: 3 I0404 00:29:32.854668 7 log.go:172] (0xc00277f4a0) (0xc001e88140) Stream removed, broadcasting: 5 Apr 4 00:29:32.854: INFO: Exec stderr: "" Apr 4 00:29:32.854: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-2422 PodName:test-pod ContainerName:busybox-2 
Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 4 00:29:32.854: INFO: >>> kubeConfig: /root/.kube/config I0404 00:29:32.882215 7 log.go:172] (0xc004d4a580) (0xc0013d1860) Create stream I0404 00:29:32.882244 7 log.go:172] (0xc004d4a580) (0xc0013d1860) Stream added, broadcasting: 1 I0404 00:29:32.884483 7 log.go:172] (0xc004d4a580) Reply frame received for 1 I0404 00:29:32.884535 7 log.go:172] (0xc004d4a580) (0xc001e881e0) Create stream I0404 00:29:32.884558 7 log.go:172] (0xc004d4a580) (0xc001e881e0) Stream added, broadcasting: 3 I0404 00:29:32.885904 7 log.go:172] (0xc004d4a580) Reply frame received for 3 I0404 00:29:32.885932 7 log.go:172] (0xc004d4a580) (0xc002550500) Create stream I0404 00:29:32.885943 7 log.go:172] (0xc004d4a580) (0xc002550500) Stream added, broadcasting: 5 I0404 00:29:32.886940 7 log.go:172] (0xc004d4a580) Reply frame received for 5 I0404 00:29:32.940651 7 log.go:172] (0xc004d4a580) Data frame received for 5 I0404 00:29:32.940692 7 log.go:172] (0xc004d4a580) Data frame received for 3 I0404 00:29:32.940735 7 log.go:172] (0xc001e881e0) (3) Data frame handling I0404 00:29:32.940781 7 log.go:172] (0xc002550500) (5) Data frame handling I0404 00:29:32.940826 7 log.go:172] (0xc001e881e0) (3) Data frame sent I0404 00:29:32.940856 7 log.go:172] (0xc004d4a580) Data frame received for 3 I0404 00:29:32.940878 7 log.go:172] (0xc001e881e0) (3) Data frame handling I0404 00:29:32.942455 7 log.go:172] (0xc004d4a580) Data frame received for 1 I0404 00:29:32.942484 7 log.go:172] (0xc0013d1860) (1) Data frame handling I0404 00:29:32.942503 7 log.go:172] (0xc0013d1860) (1) Data frame sent I0404 00:29:32.942526 7 log.go:172] (0xc004d4a580) (0xc0013d1860) Stream removed, broadcasting: 1 I0404 00:29:32.942547 7 log.go:172] (0xc004d4a580) Go away received I0404 00:29:32.942636 7 log.go:172] (0xc004d4a580) (0xc0013d1860) Stream removed, broadcasting: 1 I0404 00:29:32.942662 7 log.go:172] (0xc004d4a580) (0xc001e881e0) Stream removed, 
broadcasting: 3 I0404 00:29:32.942674 7 log.go:172] (0xc004d4a580) (0xc002550500) Stream removed, broadcasting: 5 Apr 4 00:29:32.942: INFO: Exec stderr: "" Apr 4 00:29:32.942: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-2422 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 4 00:29:32.942: INFO: >>> kubeConfig: /root/.kube/config I0404 00:29:32.974007 7 log.go:172] (0xc004d4aa50) (0xc0013d1ae0) Create stream I0404 00:29:32.974032 7 log.go:172] (0xc004d4aa50) (0xc0013d1ae0) Stream added, broadcasting: 1 I0404 00:29:32.975851 7 log.go:172] (0xc004d4aa50) Reply frame received for 1 I0404 00:29:32.975885 7 log.go:172] (0xc004d4aa50) (0xc0018b5400) Create stream I0404 00:29:32.975897 7 log.go:172] (0xc004d4aa50) (0xc0018b5400) Stream added, broadcasting: 3 I0404 00:29:32.976757 7 log.go:172] (0xc004d4aa50) Reply frame received for 3 I0404 00:29:32.976798 7 log.go:172] (0xc004d4aa50) (0xc001e88320) Create stream I0404 00:29:32.976814 7 log.go:172] (0xc004d4aa50) (0xc001e88320) Stream added, broadcasting: 5 I0404 00:29:32.977837 7 log.go:172] (0xc004d4aa50) Reply frame received for 5 I0404 00:29:33.028966 7 log.go:172] (0xc004d4aa50) Data frame received for 5 I0404 00:29:33.028990 7 log.go:172] (0xc001e88320) (5) Data frame handling I0404 00:29:33.029035 7 log.go:172] (0xc004d4aa50) Data frame received for 3 I0404 00:29:33.029061 7 log.go:172] (0xc0018b5400) (3) Data frame handling I0404 00:29:33.029091 7 log.go:172] (0xc0018b5400) (3) Data frame sent I0404 00:29:33.029256 7 log.go:172] (0xc004d4aa50) Data frame received for 3 I0404 00:29:33.029286 7 log.go:172] (0xc0018b5400) (3) Data frame handling I0404 00:29:33.030887 7 log.go:172] (0xc004d4aa50) Data frame received for 1 I0404 00:29:33.030930 7 log.go:172] (0xc0013d1ae0) (1) Data frame handling I0404 00:29:33.030960 7 log.go:172] (0xc0013d1ae0) (1) Data frame sent I0404 00:29:33.030987 7 log.go:172] 
(0xc004d4aa50) (0xc0013d1ae0) Stream removed, broadcasting: 1 I0404 00:29:33.031011 7 log.go:172] (0xc004d4aa50) Go away received I0404 00:29:33.031117 7 log.go:172] (0xc004d4aa50) (0xc0013d1ae0) Stream removed, broadcasting: 1 I0404 00:29:33.031134 7 log.go:172] (0xc004d4aa50) (0xc0018b5400) Stream removed, broadcasting: 3 I0404 00:29:33.031145 7 log.go:172] (0xc004d4aa50) (0xc001e88320) Stream removed, broadcasting: 5 Apr 4 00:29:33.031: INFO: Exec stderr: "" STEP: Verifying /etc/hosts of container is not kubelet-managed since container specifies /etc/hosts mount Apr 4 00:29:33.031: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-2422 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 4 00:29:33.031: INFO: >>> kubeConfig: /root/.kube/config I0404 00:29:33.063060 7 log.go:172] (0xc004d4b080) (0xc001a1a000) Create stream I0404 00:29:33.063105 7 log.go:172] (0xc004d4b080) (0xc001a1a000) Stream added, broadcasting: 1 I0404 00:29:33.066090 7 log.go:172] (0xc004d4b080) Reply frame received for 1 I0404 00:29:33.066134 7 log.go:172] (0xc004d4b080) (0xc001a1a1e0) Create stream I0404 00:29:33.066151 7 log.go:172] (0xc004d4b080) (0xc001a1a1e0) Stream added, broadcasting: 3 I0404 00:29:33.067276 7 log.go:172] (0xc004d4b080) Reply frame received for 3 I0404 00:29:33.067309 7 log.go:172] (0xc004d4b080) (0xc0025505a0) Create stream I0404 00:29:33.067321 7 log.go:172] (0xc004d4b080) (0xc0025505a0) Stream added, broadcasting: 5 I0404 00:29:33.068351 7 log.go:172] (0xc004d4b080) Reply frame received for 5 I0404 00:29:33.133350 7 log.go:172] (0xc004d4b080) Data frame received for 5 I0404 00:29:33.133378 7 log.go:172] (0xc0025505a0) (5) Data frame handling I0404 00:29:33.133424 7 log.go:172] (0xc004d4b080) Data frame received for 3 I0404 00:29:33.133458 7 log.go:172] (0xc001a1a1e0) (3) Data frame handling I0404 00:29:33.133490 7 log.go:172] (0xc001a1a1e0) (3) Data frame sent I0404 
00:29:33.133511 7 log.go:172] (0xc004d4b080) Data frame received for 3 I0404 00:29:33.133528 7 log.go:172] (0xc001a1a1e0) (3) Data frame handling I0404 00:29:33.135224 7 log.go:172] (0xc004d4b080) Data frame received for 1 I0404 00:29:33.135249 7 log.go:172] (0xc001a1a000) (1) Data frame handling I0404 00:29:33.135273 7 log.go:172] (0xc001a1a000) (1) Data frame sent I0404 00:29:33.135288 7 log.go:172] (0xc004d4b080) (0xc001a1a000) Stream removed, broadcasting: 1 I0404 00:29:33.135358 7 log.go:172] (0xc004d4b080) Go away received I0404 00:29:33.135399 7 log.go:172] (0xc004d4b080) (0xc001a1a000) Stream removed, broadcasting: 1 I0404 00:29:33.135441 7 log.go:172] (0xc004d4b080) (0xc001a1a1e0) Stream removed, broadcasting: 3 I0404 00:29:33.135468 7 log.go:172] (0xc004d4b080) (0xc0025505a0) Stream removed, broadcasting: 5 Apr 4 00:29:33.135: INFO: Exec stderr: "" Apr 4 00:29:33.135: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-2422 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 4 00:29:33.135: INFO: >>> kubeConfig: /root/.kube/config I0404 00:29:33.170588 7 log.go:172] (0xc002d209a0) (0xc001e888c0) Create stream I0404 00:29:33.170617 7 log.go:172] (0xc002d209a0) (0xc001e888c0) Stream added, broadcasting: 1 I0404 00:29:33.173198 7 log.go:172] (0xc002d209a0) Reply frame received for 1 I0404 00:29:33.173233 7 log.go:172] (0xc002d209a0) (0xc0018b5860) Create stream I0404 00:29:33.173243 7 log.go:172] (0xc002d209a0) (0xc0018b5860) Stream added, broadcasting: 3 I0404 00:29:33.174151 7 log.go:172] (0xc002d209a0) Reply frame received for 3 I0404 00:29:33.174192 7 log.go:172] (0xc002d209a0) (0xc002550640) Create stream I0404 00:29:33.174205 7 log.go:172] (0xc002d209a0) (0xc002550640) Stream added, broadcasting: 5 I0404 00:29:33.175149 7 log.go:172] (0xc002d209a0) Reply frame received for 5 I0404 00:29:33.240262 7 log.go:172] (0xc002d209a0) Data frame received for 3 
I0404 00:29:33.240296 7 log.go:172] (0xc0018b5860) (3) Data frame handling I0404 00:29:33.240329 7 log.go:172] (0xc0018b5860) (3) Data frame sent I0404 00:29:33.240365 7 log.go:172] (0xc002d209a0) Data frame received for 5 I0404 00:29:33.240401 7 log.go:172] (0xc002550640) (5) Data frame handling I0404 00:29:33.240426 7 log.go:172] (0xc002d209a0) Data frame received for 3 I0404 00:29:33.240452 7 log.go:172] (0xc0018b5860) (3) Data frame handling I0404 00:29:33.242466 7 log.go:172] (0xc002d209a0) Data frame received for 1 I0404 00:29:33.242489 7 log.go:172] (0xc001e888c0) (1) Data frame handling I0404 00:29:33.242498 7 log.go:172] (0xc001e888c0) (1) Data frame sent I0404 00:29:33.242507 7 log.go:172] (0xc002d209a0) (0xc001e888c0) Stream removed, broadcasting: 1 I0404 00:29:33.242515 7 log.go:172] (0xc002d209a0) Go away received I0404 00:29:33.242652 7 log.go:172] (0xc002d209a0) (0xc001e888c0) Stream removed, broadcasting: 1 I0404 00:29:33.242667 7 log.go:172] (0xc002d209a0) (0xc0018b5860) Stream removed, broadcasting: 3 I0404 00:29:33.242672 7 log.go:172] (0xc002d209a0) (0xc002550640) Stream removed, broadcasting: 5 Apr 4 00:29:33.242: INFO: Exec stderr: "" STEP: Verifying /etc/hosts content of container is not kubelet-managed for pod with hostNetwork=true Apr 4 00:29:33.242: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-2422 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 4 00:29:33.242: INFO: >>> kubeConfig: /root/.kube/config I0404 00:29:33.278316 7 log.go:172] (0xc002d20fd0) (0xc001e88aa0) Create stream I0404 00:29:33.278339 7 log.go:172] (0xc002d20fd0) (0xc001e88aa0) Stream added, broadcasting: 1 I0404 00:29:33.280001 7 log.go:172] (0xc002d20fd0) Reply frame received for 1 I0404 00:29:33.280041 7 log.go:172] (0xc002d20fd0) (0xc0018b5a40) Create stream I0404 00:29:33.280055 7 log.go:172] (0xc002d20fd0) (0xc0018b5a40) Stream added, broadcasting: 3 
I0404 00:29:33.280922 7 log.go:172] (0xc002d20fd0) Reply frame received for 3 I0404 00:29:33.280955 7 log.go:172] (0xc002d20fd0) (0xc002810b40) Create stream I0404 00:29:33.280968 7 log.go:172] (0xc002d20fd0) (0xc002810b40) Stream added, broadcasting: 5 I0404 00:29:33.282107 7 log.go:172] (0xc002d20fd0) Reply frame received for 5 I0404 00:29:33.333719 7 log.go:172] (0xc002d20fd0) Data frame received for 5 I0404 00:29:33.333755 7 log.go:172] (0xc002810b40) (5) Data frame handling I0404 00:29:33.333817 7 log.go:172] (0xc002d20fd0) Data frame received for 3 I0404 00:29:33.333855 7 log.go:172] (0xc0018b5a40) (3) Data frame handling I0404 00:29:33.333878 7 log.go:172] (0xc0018b5a40) (3) Data frame sent I0404 00:29:33.333895 7 log.go:172] (0xc002d20fd0) Data frame received for 3 I0404 00:29:33.333907 7 log.go:172] (0xc0018b5a40) (3) Data frame handling I0404 00:29:33.335727 7 log.go:172] (0xc002d20fd0) Data frame received for 1 I0404 00:29:33.335750 7 log.go:172] (0xc001e88aa0) (1) Data frame handling I0404 00:29:33.335777 7 log.go:172] (0xc001e88aa0) (1) Data frame sent I0404 00:29:33.335819 7 log.go:172] (0xc002d20fd0) (0xc001e88aa0) Stream removed, broadcasting: 1 I0404 00:29:33.335855 7 log.go:172] (0xc002d20fd0) Go away received I0404 00:29:33.335995 7 log.go:172] (0xc002d20fd0) (0xc001e88aa0) Stream removed, broadcasting: 1 I0404 00:29:33.336026 7 log.go:172] (0xc002d20fd0) (0xc0018b5a40) Stream removed, broadcasting: 3 I0404 00:29:33.336039 7 log.go:172] (0xc002d20fd0) (0xc002810b40) Stream removed, broadcasting: 5 Apr 4 00:29:33.336: INFO: Exec stderr: "" Apr 4 00:29:33.336: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-2422 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 4 00:29:33.336: INFO: >>> kubeConfig: /root/.kube/config I0404 00:29:33.375057 7 log.go:172] (0xc00277fad0) (0xc002550820) Create stream I0404 00:29:33.375087 7 log.go:172] 
(0xc00277fad0) (0xc002550820) Stream added, broadcasting: 1 I0404 00:29:33.376935 7 log.go:172] (0xc00277fad0) Reply frame received for 1 I0404 00:29:33.376964 7 log.go:172] (0xc00277fad0) (0xc0025508c0) Create stream I0404 00:29:33.376974 7 log.go:172] (0xc00277fad0) (0xc0025508c0) Stream added, broadcasting: 3 I0404 00:29:33.378196 7 log.go:172] (0xc00277fad0) Reply frame received for 3 I0404 00:29:33.378245 7 log.go:172] (0xc00277fad0) (0xc001e88b40) Create stream I0404 00:29:33.378259 7 log.go:172] (0xc00277fad0) (0xc001e88b40) Stream added, broadcasting: 5 I0404 00:29:33.379149 7 log.go:172] (0xc00277fad0) Reply frame received for 5 I0404 00:29:33.432503 7 log.go:172] (0xc00277fad0) Data frame received for 3 I0404 00:29:33.432521 7 log.go:172] (0xc0025508c0) (3) Data frame handling I0404 00:29:33.432529 7 log.go:172] (0xc0025508c0) (3) Data frame sent I0404 00:29:33.432537 7 log.go:172] (0xc00277fad0) Data frame received for 3 I0404 00:29:33.432543 7 log.go:172] (0xc0025508c0) (3) Data frame handling I0404 00:29:33.432795 7 log.go:172] (0xc00277fad0) Data frame received for 5 I0404 00:29:33.432838 7 log.go:172] (0xc001e88b40) (5) Data frame handling I0404 00:29:33.434705 7 log.go:172] (0xc00277fad0) Data frame received for 1 I0404 00:29:33.434751 7 log.go:172] (0xc002550820) (1) Data frame handling I0404 00:29:33.434780 7 log.go:172] (0xc002550820) (1) Data frame sent I0404 00:29:33.434811 7 log.go:172] (0xc00277fad0) (0xc002550820) Stream removed, broadcasting: 1 I0404 00:29:33.434846 7 log.go:172] (0xc00277fad0) Go away received I0404 00:29:33.434941 7 log.go:172] (0xc00277fad0) (0xc002550820) Stream removed, broadcasting: 1 I0404 00:29:33.434973 7 log.go:172] (0xc00277fad0) (0xc0025508c0) Stream removed, broadcasting: 3 I0404 00:29:33.434996 7 log.go:172] (0xc00277fad0) (0xc001e88b40) Stream removed, broadcasting: 5 Apr 4 00:29:33.435: INFO: Exec stderr: "" Apr 4 00:29:33.435: INFO: ExecWithOptions {Command:[cat /etc/hosts] 
Namespace:e2e-kubelet-etc-hosts-2422 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 4 00:29:33.435: INFO: >>> kubeConfig: /root/.kube/config I0404 00:29:33.471110 7 log.go:172] (0xc002d21600) (0xc001e88f00) Create stream I0404 00:29:33.471138 7 log.go:172] (0xc002d21600) (0xc001e88f00) Stream added, broadcasting: 1 I0404 00:29:33.473560 7 log.go:172] (0xc002d21600) Reply frame received for 1 I0404 00:29:33.473604 7 log.go:172] (0xc002d21600) (0xc001e88fa0) Create stream I0404 00:29:33.473621 7 log.go:172] (0xc002d21600) (0xc001e88fa0) Stream added, broadcasting: 3 I0404 00:29:33.474698 7 log.go:172] (0xc002d21600) Reply frame received for 3 I0404 00:29:33.474745 7 log.go:172] (0xc002d21600) (0xc0018b5ae0) Create stream I0404 00:29:33.474760 7 log.go:172] (0xc002d21600) (0xc0018b5ae0) Stream added, broadcasting: 5 I0404 00:29:33.475694 7 log.go:172] (0xc002d21600) Reply frame received for 5 I0404 00:29:33.538757 7 log.go:172] (0xc002d21600) Data frame received for 3 I0404 00:29:33.538813 7 log.go:172] (0xc001e88fa0) (3) Data frame handling I0404 00:29:33.538873 7 log.go:172] (0xc002d21600) Data frame received for 5 I0404 00:29:33.538935 7 log.go:172] (0xc0018b5ae0) (5) Data frame handling I0404 00:29:33.538977 7 log.go:172] (0xc001e88fa0) (3) Data frame sent I0404 00:29:33.539001 7 log.go:172] (0xc002d21600) Data frame received for 3 I0404 00:29:33.539019 7 log.go:172] (0xc001e88fa0) (3) Data frame handling I0404 00:29:33.541423 7 log.go:172] (0xc002d21600) Data frame received for 1 I0404 00:29:33.541464 7 log.go:172] (0xc001e88f00) (1) Data frame handling I0404 00:29:33.541500 7 log.go:172] (0xc001e88f00) (1) Data frame sent I0404 00:29:33.541519 7 log.go:172] (0xc002d21600) (0xc001e88f00) Stream removed, broadcasting: 1 I0404 00:29:33.541546 7 log.go:172] (0xc002d21600) Go away received I0404 00:29:33.541789 7 log.go:172] (0xc002d21600) (0xc001e88f00) Stream removed, broadcasting: 1 
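The `ExecWithOptions` calls above run a command inside the pod and capture stdout and stderr separately (the repeated "Exec stderr: \"\"" lines confirm an empty error stream; the numbered streams 1, 3 and 5 appear to be the control, stdout and stderr channels of the exec protocol). As a loose local analogy only, not the framework's actual transport, the same capture pattern looks like this in Python:

```python
import subprocess

def exec_capture(cmd):
    """Run a command and capture stdout/stderr separately,
    loosely mirroring what ExecWithOptions reports in the log."""
    result = subprocess.run(cmd, capture_output=True, text=True, check=True)
    return result.stdout, result.stderr

# e.g. exec_capture(["cat", "/etc/hosts"]) inside the pod; here a harmless echo
out, err = exec_capture(["echo", "hello"])
```

As in the log, a successful run leaves the stderr side empty.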
I0404 00:29:33.541831 7 log.go:172] (0xc002d21600) (0xc001e88fa0) Stream removed, broadcasting: 3 I0404 00:29:33.541851 7 log.go:172] (0xc002d21600) (0xc0018b5ae0) Stream removed, broadcasting: 5 Apr 4 00:29:33.541: INFO: Exec stderr: "" Apr 4 00:29:33.541: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-2422 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 4 00:29:33.541: INFO: >>> kubeConfig: /root/.kube/config I0404 00:29:33.578560 7 log.go:172] (0xc002af62c0) (0xc002550c80) Create stream I0404 00:29:33.578591 7 log.go:172] (0xc002af62c0) (0xc002550c80) Stream added, broadcasting: 1 I0404 00:29:33.580926 7 log.go:172] (0xc002af62c0) Reply frame received for 1 I0404 00:29:33.580973 7 log.go:172] (0xc002af62c0) (0xc001e890e0) Create stream I0404 00:29:33.580991 7 log.go:172] (0xc002af62c0) (0xc001e890e0) Stream added, broadcasting: 3 I0404 00:29:33.582256 7 log.go:172] (0xc002af62c0) Reply frame received for 3 I0404 00:29:33.582287 7 log.go:172] (0xc002af62c0) (0xc001e89220) Create stream I0404 00:29:33.582298 7 log.go:172] (0xc002af62c0) (0xc001e89220) Stream added, broadcasting: 5 I0404 00:29:33.583167 7 log.go:172] (0xc002af62c0) Reply frame received for 5 I0404 00:29:33.648608 7 log.go:172] (0xc002af62c0) Data frame received for 3 I0404 00:29:33.648653 7 log.go:172] (0xc001e890e0) (3) Data frame handling I0404 00:29:33.648675 7 log.go:172] (0xc001e890e0) (3) Data frame sent I0404 00:29:33.648698 7 log.go:172] (0xc002af62c0) Data frame received for 3 I0404 00:29:33.648713 7 log.go:172] (0xc001e890e0) (3) Data frame handling I0404 00:29:33.648735 7 log.go:172] (0xc002af62c0) Data frame received for 5 I0404 00:29:33.648754 7 log.go:172] (0xc001e89220) (5) Data frame handling I0404 00:29:33.650031 7 log.go:172] (0xc002af62c0) Data frame received for 1 I0404 00:29:33.650051 7 log.go:172] (0xc002550c80) (1) Data frame handling I0404 
00:29:33.650072 7 log.go:172] (0xc002550c80) (1) Data frame sent I0404 00:29:33.650187 7 log.go:172] (0xc002af62c0) (0xc002550c80) Stream removed, broadcasting: 1 I0404 00:29:33.650277 7 log.go:172] (0xc002af62c0) (0xc002550c80) Stream removed, broadcasting: 1 I0404 00:29:33.650304 7 log.go:172] (0xc002af62c0) (0xc001e890e0) Stream removed, broadcasting: 3 I0404 00:29:33.650369 7 log.go:172] (0xc002af62c0) Go away received I0404 00:29:33.650537 7 log.go:172] (0xc002af62c0) (0xc001e89220) Stream removed, broadcasting: 5 Apr 4 00:29:33.650: INFO: Exec stderr: "" [AfterEach] [k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 4 00:29:33.650: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-kubelet-etc-hosts-2422" for this suite. • [SLOW TEST:11.202 seconds] [k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":206,"skipped":3798,"failed":0} [sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 4 00:29:33.660: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in 
namespace [It] should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test emptydir 0666 on node default medium Apr 4 00:29:33.733: INFO: Waiting up to 5m0s for pod "pod-fc4f8262-b3c4-421c-9b72-fe1bc0f184fb" in namespace "emptydir-1759" to be "Succeeded or Failed" Apr 4 00:29:33.736: INFO: Pod "pod-fc4f8262-b3c4-421c-9b72-fe1bc0f184fb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.332428ms Apr 4 00:29:35.739: INFO: Pod "pod-fc4f8262-b3c4-421c-9b72-fe1bc0f184fb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005766368s Apr 4 00:29:37.743: INFO: Pod "pod-fc4f8262-b3c4-421c-9b72-fe1bc0f184fb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.009703216s STEP: Saw pod success Apr 4 00:29:37.743: INFO: Pod "pod-fc4f8262-b3c4-421c-9b72-fe1bc0f184fb" satisfied condition "Succeeded or Failed" Apr 4 00:29:37.746: INFO: Trying to get logs from node latest-worker2 pod pod-fc4f8262-b3c4-421c-9b72-fe1bc0f184fb container test-container: STEP: delete the pod Apr 4 00:29:37.767: INFO: Waiting for pod pod-fc4f8262-b3c4-421c-9b72-fe1bc0f184fb to disappear Apr 4 00:29:37.771: INFO: Pod pod-fc4f8262-b3c4-421c-9b72-fe1bc0f184fb no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 4 00:29:37.771: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-1759" for this suite. 
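The "Waiting up to 5m0s for pod ... to be 'Succeeded or Failed'" lines above, with Elapsed values roughly two seconds apart, show a plain poll-with-timeout loop. A minimal sketch of that loop (the `get_phase` callback and the 2-second default interval are assumptions inferred from the elapsed timestamps, not the framework's exact code):

```python
import time

def wait_for_pod_phase(get_phase, timeout=300.0, interval=2.0):
    """Poll get_phase() until it returns a terminal phase or the
    timeout expires, mirroring the 'Succeeded or Failed' condition."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        phase = get_phase()
        if phase in ("Succeeded", "Failed"):
            return phase
        time.sleep(interval)
    raise TimeoutError("pod did not reach a terminal phase")
```

With a phase sequence like the log's (Pending, Pending, Succeeded), the loop returns on the third poll, matching the ~4s elapsed time reported.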
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":207,"skipped":3798,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 4 00:29:37.779: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153 [It] should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating the pod Apr 4 00:29:37.833: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 4 00:29:44.936: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-1862" for this suite. 
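The InitContainer test that follows relies on the kubelet guarantee that init containers run one at a time, each to completion, before any app container starts. A toy model of that ordering (the `run` callables standing in for container processes are hypothetical):

```python
def start_pod(init_containers, app_containers):
    """Run init containers sequentially; start app containers only
    after every init container has exited successfully."""
    for run in init_containers:
        if run() != 0:          # non-zero exit: the pod cannot proceed
            return "Init:Error"
    for run in app_containers:  # all init containers succeeded
        run()
    return "Running"
```

On a RestartAlways pod the kubelet would retry a failed init container rather than give up, but the sequencing invariant is the same.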
• [SLOW TEST:7.182 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance]","total":275,"completed":208,"skipped":3841,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Proxy server should support proxy with --port 0 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 4 00:29:44.962: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219 [It] should support proxy with --port 0 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: starting the proxy server Apr 4 00:29:45.054: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config proxy -p 0 --disable-filter' STEP: curling proxy /api/ output [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 4 00:29:45.135: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-4810" for this suite. 
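The proxy test below passes `-p 0`, which asks the operating system for an ephemeral port instead of a fixed one; the test then reads the chosen port back before curling `/api/`. The underlying bind-to-port-zero trick, shown here with plain sockets rather than kubectl's actual implementation:

```python
import socket

def bind_ephemeral_port():
    """Bind to port 0 so the kernel picks a free port, then report it."""
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.bind(("127.0.0.1", 0))
    port = s.getsockname()[1]  # the port the kernel actually assigned
    return s, port

s, port = bind_ephemeral_port()
s.close()
```

This avoids flaky port collisions when many test runs share a machine, which is presumably why the conformance test exercises `--port 0` at all.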
•{"msg":"PASSED [sig-cli] Kubectl client Proxy server should support proxy with --port 0 [Conformance]","total":275,"completed":209,"skipped":3870,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 4 00:29:45.150: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Apr 4 00:29:45.894: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Apr 4 00:29:47.906: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721556985, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721556985, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", 
LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721556985, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721556985, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6cc9cc9dc\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Apr 4 00:29:50.955: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should unconditionally reject operations on fail closed webhook [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Registering a webhook that server cannot talk to, with fail closed policy, via the AdmissionRegistration API STEP: create a namespace for the webhook STEP: create a configmap should be unconditionally rejected by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 4 00:29:51.207: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-9520" for this suite. STEP: Destroying namespace "webhook-9520-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:6.240 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should unconditionally reject operations on fail closed webhook [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","total":275,"completed":210,"skipped":3919,"failed":0} SSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a pod. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 4 00:29:51.391: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a pod. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a Pod that fits quota STEP: Ensuring ResourceQuota status captures the pod usage STEP: Not allowing a pod to be created that exceeds remaining quota STEP: Not allowing a pod to be created that exceeds remaining quota(validation on extended resources) STEP: Ensuring a pod cannot update its resource requirements STEP: Ensuring attempts to update pod resource requirements did not change quota usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 4 00:30:04.524: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-5132" for this suite. • [SLOW TEST:13.141 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a pod. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a pod. 
[Conformance]","total":275,"completed":211,"skipped":3929,"failed":0} SSSSSSSSSS ------------------------------ [sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 4 00:30:04.532: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via environment variable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating configMap configmap-8359/configmap-test-e1674cd5-3351-420c-b7e7-660b34587f2a STEP: Creating a pod to test consume configMaps Apr 4 00:30:04.612: INFO: Waiting up to 5m0s for pod "pod-configmaps-4f4ffd4f-2417-4fa1-9e91-9866a2a4238e" in namespace "configmap-8359" to be "Succeeded or Failed" Apr 4 00:30:04.627: INFO: Pod "pod-configmaps-4f4ffd4f-2417-4fa1-9e91-9866a2a4238e": Phase="Pending", Reason="", readiness=false. Elapsed: 14.527544ms Apr 4 00:30:06.631: INFO: Pod "pod-configmaps-4f4ffd4f-2417-4fa1-9e91-9866a2a4238e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018275298s Apr 4 00:30:08.646: INFO: Pod "pod-configmaps-4f4ffd4f-2417-4fa1-9e91-9866a2a4238e": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.033818591s STEP: Saw pod success Apr 4 00:30:08.646: INFO: Pod "pod-configmaps-4f4ffd4f-2417-4fa1-9e91-9866a2a4238e" satisfied condition "Succeeded or Failed" Apr 4 00:30:08.650: INFO: Trying to get logs from node latest-worker pod pod-configmaps-4f4ffd4f-2417-4fa1-9e91-9866a2a4238e container env-test: STEP: delete the pod Apr 4 00:30:08.666: INFO: Waiting for pod pod-configmaps-4f4ffd4f-2417-4fa1-9e91-9866a2a4238e to disappear Apr 4 00:30:08.670: INFO: Pod pod-configmaps-4f4ffd4f-2417-4fa1-9e91-9866a2a4238e no longer exists [AfterEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 4 00:30:08.670: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-8359" for this suite. •{"msg":"PASSED [sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance]","total":275,"completed":212,"skipped":3939,"failed":0} ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and ensure its status is promptly calculated. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 4 00:30:08.676: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and ensure its status is promptly calculated. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 4 00:30:15.742: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-9722" for this suite. • [SLOW TEST:7.085 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and ensure its status is promptly calculated. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and ensure its status is promptly calculated. 
[Conformance]","total":275,"completed":213,"skipped":3939,"failed":0} S ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 4 00:30:15.761: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Apr 4 00:30:16.431: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Apr 4 00:30:18.439: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721557016, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721557016, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721557016, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721557016, 
loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6cc9cc9dc\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Apr 4 00:30:21.472: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] listing mutating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Listing all of the created validation webhooks STEP: Creating a configMap that should be mutated STEP: Deleting the collection of validation webhooks STEP: Creating a configMap that should not be mutated [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 4 00:30:21.978: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-4617" for this suite. STEP: Destroying namespace "webhook-4617-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:6.363 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 listing mutating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","total":275,"completed":214,"skipped":3940,"failed":0} SS ------------------------------ [sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 4 00:30:22.125: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating configMap with name cm-test-opt-del-30475808-6329-485d-8edb-a4ae64121731 STEP: Creating configMap with name cm-test-opt-upd-465411c9-d982-4e3f-8a54-e908569d136f STEP: Creating the pod STEP: Deleting configmap cm-test-opt-del-30475808-6329-485d-8edb-a4ae64121731 STEP: Updating configmap cm-test-opt-upd-465411c9-d982-4e3f-8a54-e908569d136f STEP: Creating configMap with name cm-test-opt-create-e49addb0-1b45-4949-84a6-328ce6b9423f STEP: waiting to 
observe update in volume [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 4 00:31:44.735: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-4672" for this suite. • [SLOW TEST:82.617 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:36 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":275,"completed":215,"skipped":3942,"failed":0} S ------------------------------ [sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 4 00:31:44.742: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: create the rc STEP: delete the rc STEP: wait for all pods to be garbage collected STEP: Gathering metrics W0404 00:31:54.813851 7 metrics_grabber.go:84] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. 
Apr 4 00:31:54.813: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 4 00:31:54.813: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-1111" for this suite. 
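The garbage-collector test above verifies that when the rc is deleted without orphaning, every pod whose ownerReference points at it is collected too. A toy cascade over dict-based objects (the `owner` field is a stand-in for the real ownerReferences list, not the actual API types):

```python
def cascade_delete(objects, deleted_uid):
    """Remove every object owned, directly or transitively, by
    deleted_uid -- the 'not orphaning' behaviour the test exercises."""
    doomed = {deleted_uid}
    changed = True
    while changed:              # propagate deletion through owner chains
        changed = False
        for uid, obj in objects.items():
            if obj.get("owner") in doomed and uid not in doomed:
                doomed.add(uid)
                changed = True
    return {uid: o for uid, o in objects.items() if uid not in doomed}
```

Objects with no owner, or with a surviving owner, are left alone, which is why the test has to wait for collection rather than assert immediately: the real collector propagates deletions asynchronously.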
• [SLOW TEST:10.078 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance]","total":275,"completed":216,"skipped":3943,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-apps] Deployment deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 4 00:31:54.820: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:74 [It] deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Apr 4 00:31:54.876: INFO: Pod name cleanup-pod: Found 0 pods out of 1 Apr 4 00:31:59.880: INFO: Pod name cleanup-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Apr 4 00:31:59.880: INFO: Creating deployment test-cleanup-deployment STEP: Waiting for deployment test-cleanup-deployment history to be cleaned up [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:68 Apr 4 00:31:59.900: INFO: Deployment "test-cleanup-deployment": &Deployment{ObjectMeta:{test-cleanup-deployment deployment-98 
/apis/apps/v1/namespaces/deployment-98/deployments/test-cleanup-deployment 507a6120-2a94-4324-81ff-3b68df8f4bc4 5209476 1 2020-04-04 00:31:59 +0000 UTC map[name:cleanup-pod] map[] [] [] []},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:cleanup-pod] map[] [] [] []} {[] [] [{agnhost us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc003548ec8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%!,(MISSING)MaxSurge:25%!,(MISSING)},},MinReadySeconds:0,RevisionHistoryLimit:*0,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:0,Replicas:0,UpdatedReplicas:0,AvailableReplicas:0,UnavailableReplicas:0,Conditions:[]DeploymentCondition{},ReadyReplicas:0,CollisionCount:nil,},} Apr 4 00:31:59.956: INFO: New ReplicaSet "test-cleanup-deployment-577c77b589" of Deployment "test-cleanup-deployment": &ReplicaSet{ObjectMeta:{test-cleanup-deployment-577c77b589 deployment-98 /apis/apps/v1/namespaces/deployment-98/replicasets/test-cleanup-deployment-577c77b589 e2260455-99ec-49d2-90c0-162d8a1851f1 5209478 1 2020-04-04 00:31:59 +0000 UTC map[name:cleanup-pod pod-template-hash:577c77b589] map[deployment.kubernetes.io/desired-replicas:1 
deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-cleanup-deployment 507a6120-2a94-4324-81ff-3b68df8f4bc4 0xc003c700e7 0xc003c700e8}] [] []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod-template-hash: 577c77b589,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:cleanup-pod pod-template-hash:577c77b589] map[] [] [] []} {[] [] [{agnhost us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc003c70158 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:0,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Apr 4 00:31:59.956: INFO: All old ReplicaSets of Deployment "test-cleanup-deployment": Apr 4 00:31:59.956: INFO: &ReplicaSet{ObjectMeta:{test-cleanup-controller deployment-98 /apis/apps/v1/namespaces/deployment-98/replicasets/test-cleanup-controller 4f1c2486-f7fe-4ad2-b318-d0f0a2dc4476 5209477 1 2020-04-04 00:31:54 +0000 UTC map[name:cleanup-pod pod:httpd] map[] [{apps/v1 Deployment test-cleanup-deployment 507a6120-2a94-4324-81ff-3b68df8f4bc4 0xc003c70017 0xc003c70018}] [] []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod: 
httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:cleanup-pod pod:httpd] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc003c70078 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} Apr 4 00:32:00.019: INFO: Pod "test-cleanup-controller-mrsrg" is available: &Pod{ObjectMeta:{test-cleanup-controller-mrsrg test-cleanup-controller- deployment-98 /api/v1/namespaces/deployment-98/pods/test-cleanup-controller-mrsrg 60b26c6a-35da-47f8-993e-8d9aefbf8da9 5209466 0 2020-04-04 00:31:54 +0000 UTC map[name:cleanup-pod pod:httpd] map[] [{apps/v1 ReplicaSet test-cleanup-controller 4f1c2486-f7fe-4ad2-b318-d0f0a2dc4476 0xc003c70617 0xc003c70618}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-s4rsv,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-s4rsv,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-s4rsv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not
-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-04 00:31:54 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-04 00:31:57 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-04 00:31:57 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-04 00:31:54 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:10.244.1.99,StartTime:2020-04-04 00:31:54 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-04-04 00:31:57 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://3006d5647004262d236756c34a64a8332918ffae922a912fd73eed2f80ef0afd,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.99,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 4 
00:32:00.019: INFO: Pod "test-cleanup-deployment-577c77b589-7kwn4" is not available: &Pod{ObjectMeta:{test-cleanup-deployment-577c77b589-7kwn4 test-cleanup-deployment-577c77b589- deployment-98 /api/v1/namespaces/deployment-98/pods/test-cleanup-deployment-577c77b589-7kwn4 5eca9e9a-1bcf-47dd-b76c-481f7ad2fac3 5209484 0 2020-04-04 00:31:59 +0000 UTC map[name:cleanup-pod pod-template-hash:577c77b589] map[] [{apps/v1 ReplicaSet test-cleanup-deployment-577c77b589 e2260455-99ec-49d2-90c0-162d8a1851f1 0xc003c707a7 0xc003c707a8}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-s4rsv,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-s4rsv,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-s4rsv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:F
ile,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-04 00:31:59 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 4 00:32:00.019: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-98" for this suite. 
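In the ReplicaSet dump above, the new ReplicaSet carries the annotation deployment.kubernetes.io/max-replicas:2 even though the Deployment asks for a single replica. That follows from the default RollingUpdate parameters (25% maxSurge, 25% maxUnavailable): when resolving percentages against the replica count, maxSurge rounds up and maxUnavailable rounds down, with maxUnavailable forced to at least 1 if both would resolve to 0. A minimal sketch of that rounding (the function name is ours, not the Deployment controller's):

```python
import math

def resolve_surge_params(replicas: int, max_surge_pct: int, max_unavailable_pct: int):
    """Resolve percentage-based rolling-update parameters the way the
    Deployment controller does: maxSurge rounds up, maxUnavailable rounds
    down, and maxUnavailable is bumped to 1 if both would otherwise be 0."""
    surge = math.ceil(replicas * max_surge_pct / 100)
    unavailable = math.floor(replicas * max_unavailable_pct / 100)
    if surge == 0 and unavailable == 0:
        unavailable = 1
    return surge, unavailable

# The 1-replica test-cleanup-deployment with the 25%/25% defaults:
surge, unavailable = resolve_surge_params(1, 25, 25)
print(surge, unavailable)  # 1 0 -> desired 1 + surge 1 = max-replicas:2
```

This is why the rollout above can create the test-cleanup-deployment-577c77b589 pod before the old test-cleanup-controller pod is removed: with maxUnavailable resolving to 0, the controller must surge first.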
• [SLOW TEST:5.234 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-apps] Deployment deployment should delete old replica sets [Conformance]","total":275,"completed":217,"skipped":3955,"failed":0} SSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition listing custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 4 00:32:00.054: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] listing custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Apr 4 00:32:00.128: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 4 00:32:06.414: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-2115" for this suite. 
• [SLOW TEST:6.369 seconds] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 Simple CustomResourceDefinition /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/custom_resource_definition.go:48 listing custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition listing custom resource definition objects works [Conformance]","total":275,"completed":218,"skipped":3966,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Security Context when creating containers with AllowPrivilegeEscalation should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 4 00:32:06.424: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41 [It] should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Apr 4 00:32:06.506: INFO: Waiting up to 5m0s for pod "alpine-nnp-false-e573fc75-96c6-4f58-a2bf-fd6f81da230f" in namespace "security-context-test-269" to 
be "Succeeded or Failed" Apr 4 00:32:06.512: INFO: Pod "alpine-nnp-false-e573fc75-96c6-4f58-a2bf-fd6f81da230f": Phase="Pending", Reason="", readiness=false. Elapsed: 5.894325ms Apr 4 00:32:08.515: INFO: Pod "alpine-nnp-false-e573fc75-96c6-4f58-a2bf-fd6f81da230f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008905701s Apr 4 00:32:10.519: INFO: Pod "alpine-nnp-false-e573fc75-96c6-4f58-a2bf-fd6f81da230f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.01289143s Apr 4 00:32:10.519: INFO: Pod "alpine-nnp-false-e573fc75-96c6-4f58-a2bf-fd6f81da230f" satisfied condition "Succeeded or Failed" [AfterEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 4 00:32:10.526: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-269" for this suite. •{"msg":"PASSED [k8s.io] Security Context when creating containers with AllowPrivilegeEscalation should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":219,"skipped":3997,"failed":0} SSSSS ------------------------------ [k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class should be set on Pods with matching resource requests and limits for memory and cpu [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 4 00:32:10.535: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods Set QOS Class /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:157 [It] should be set on Pods with matching resource requests 
and limits for memory and cpu [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying QOS class is set on the pod [AfterEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 4 00:32:10.623: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-7392" for this suite. •{"msg":"PASSED [k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class should be set on Pods with matching resource requests and limits for memory and cpu [Conformance]","total":275,"completed":220,"skipped":4002,"failed":0} ------------------------------ [sig-api-machinery] Servers with support for Table transformation should return a 406 for a backend which does not implement metadata [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] Servers with support for Table transformation /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 4 00:32:10.650: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename tables STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] Servers with support for Table transformation /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/table_conversion.go:47 [It] should return a 406 for a backend which does not implement metadata [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [AfterEach] [sig-api-machinery] Servers with support for Table transformation /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 4 
00:32:10.712: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "tables-2692" for this suite. •{"msg":"PASSED [sig-api-machinery] Servers with support for Table transformation should return a 406 for a backend which does not implement metadata [Conformance]","total":275,"completed":221,"skipped":4002,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Proxy version v1 should proxy through a service and a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 4 00:32:10.742: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy through a service and a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: starting an echo server on multiple ports STEP: creating replication controller proxy-service-6hbvv in namespace proxy-9770 I0404 00:32:10.881786 7 runners.go:190] Created replication controller with name: proxy-service-6hbvv, namespace: proxy-9770, replica count: 1 I0404 00:32:11.932287 7 runners.go:190] proxy-service-6hbvv Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0404 00:32:12.932494 7 runners.go:190] proxy-service-6hbvv Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0404 00:32:13.932801 7 runners.go:190] proxy-service-6hbvv Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0404 00:32:14.933019 7 runners.go:190] proxy-service-6hbvv Pods: 1 out of 1 created, 0 running, 0 
pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0404 00:32:15.933240 7 runners.go:190] proxy-service-6hbvv Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0404 00:32:16.933510 7 runners.go:190] proxy-service-6hbvv Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0404 00:32:17.933755 7 runners.go:190] proxy-service-6hbvv Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Apr 4 00:32:17.938: INFO: setup took 7.151060116s, starting test cases STEP: running 16 cases, 20 attempts per case, 320 total attempts Apr 4 00:32:17.945: INFO: (0) /api/v1/namespaces/proxy-9770/pods/proxy-service-6hbvv-6kfct:1080/proxy/: test<... (200; 7.108718ms) Apr 4 00:32:17.945: INFO: (0) /api/v1/namespaces/proxy-9770/pods/proxy-service-6hbvv-6kfct/proxy/: test (200; 7.128375ms) Apr 4 00:32:17.945: INFO: (0) /api/v1/namespaces/proxy-9770/pods/proxy-service-6hbvv-6kfct:162/proxy/: bar (200; 7.1665ms) Apr 4 00:32:17.946: INFO: (0) /api/v1/namespaces/proxy-9770/pods/proxy-service-6hbvv-6kfct:160/proxy/: foo (200; 8.166461ms) Apr 4 00:32:17.946: INFO: (0) /api/v1/namespaces/proxy-9770/pods/http:proxy-service-6hbvv-6kfct:160/proxy/: foo (200; 8.266748ms) Apr 4 00:32:17.949: INFO: (0) /api/v1/namespaces/proxy-9770/pods/http:proxy-service-6hbvv-6kfct:162/proxy/: bar (200; 10.682699ms) Apr 4 00:32:17.950: INFO: (0) /api/v1/namespaces/proxy-9770/services/proxy-service-6hbvv:portname1/proxy/: foo (200; 12.229891ms) Apr 4 00:32:17.950: INFO: (0) /api/v1/namespaces/proxy-9770/services/http:proxy-service-6hbvv:portname1/proxy/: foo (200; 12.186487ms) Apr 4 00:32:17.951: INFO: (0) /api/v1/namespaces/proxy-9770/pods/http:proxy-service-6hbvv-6kfct:1080/proxy/: ... 
(200; 12.401549ms) Apr 4 00:32:17.951: INFO: (0) /api/v1/namespaces/proxy-9770/services/http:proxy-service-6hbvv:portname2/proxy/: bar (200; 12.580069ms) Apr 4 00:32:17.952: INFO: (0) /api/v1/namespaces/proxy-9770/services/proxy-service-6hbvv:portname2/proxy/: bar (200; 14.156783ms) Apr 4 00:32:17.953: INFO: (0) /api/v1/namespaces/proxy-9770/pods/https:proxy-service-6hbvv-6kfct:462/proxy/: tls qux (200; 15.021491ms) Apr 4 00:32:17.953: INFO: (0) /api/v1/namespaces/proxy-9770/services/https:proxy-service-6hbvv:tlsportname2/proxy/: tls qux (200; 14.907121ms) Apr 4 00:32:17.956: INFO: (0) /api/v1/namespaces/proxy-9770/services/https:proxy-service-6hbvv:tlsportname1/proxy/: tls baz (200; 18.065964ms) Apr 4 00:32:17.956: INFO: (0) /api/v1/namespaces/proxy-9770/pods/https:proxy-service-6hbvv-6kfct:460/proxy/: tls baz (200; 17.747258ms) Apr 4 00:32:17.957: INFO: (0) /api/v1/namespaces/proxy-9770/pods/https:proxy-service-6hbvv-6kfct:443/proxy/: ... (200; 4.249033ms) Apr 4 00:32:17.961: INFO: (1) /api/v1/namespaces/proxy-9770/pods/proxy-service-6hbvv-6kfct/proxy/: test (200; 4.268469ms) Apr 4 00:32:17.961: INFO: (1) /api/v1/namespaces/proxy-9770/pods/https:proxy-service-6hbvv-6kfct:462/proxy/: tls qux (200; 4.323867ms) Apr 4 00:32:17.962: INFO: (1) /api/v1/namespaces/proxy-9770/pods/https:proxy-service-6hbvv-6kfct:443/proxy/: test<... 
(200; 5.24008ms) Apr 4 00:32:17.962: INFO: (1) /api/v1/namespaces/proxy-9770/services/https:proxy-service-6hbvv:tlsportname1/proxy/: tls baz (200; 5.739143ms) Apr 4 00:32:17.962: INFO: (1) /api/v1/namespaces/proxy-9770/services/proxy-service-6hbvv:portname2/proxy/: bar (200; 5.597262ms) Apr 4 00:32:17.962: INFO: (1) /api/v1/namespaces/proxy-9770/services/proxy-service-6hbvv:portname1/proxy/: foo (200; 5.696001ms) Apr 4 00:32:17.962: INFO: (1) /api/v1/namespaces/proxy-9770/pods/http:proxy-service-6hbvv-6kfct:162/proxy/: bar (200; 5.559279ms) Apr 4 00:32:17.963: INFO: (1) /api/v1/namespaces/proxy-9770/services/https:proxy-service-6hbvv:tlsportname2/proxy/: tls qux (200; 5.798573ms) Apr 4 00:32:17.963: INFO: (1) /api/v1/namespaces/proxy-9770/services/http:proxy-service-6hbvv:portname2/proxy/: bar (200; 6.030866ms) Apr 4 00:32:17.963: INFO: (1) /api/v1/namespaces/proxy-9770/services/http:proxy-service-6hbvv:portname1/proxy/: foo (200; 6.318641ms) Apr 4 00:32:17.963: INFO: (1) /api/v1/namespaces/proxy-9770/pods/proxy-service-6hbvv-6kfct:162/proxy/: bar (200; 6.161887ms) Apr 4 00:32:17.973: INFO: (2) /api/v1/namespaces/proxy-9770/services/http:proxy-service-6hbvv:portname1/proxy/: foo (200; 10.505515ms) Apr 4 00:32:17.974: INFO: (2) /api/v1/namespaces/proxy-9770/pods/proxy-service-6hbvv-6kfct:1080/proxy/: test<... 
(200; 10.462802ms) Apr 4 00:32:17.974: INFO: (2) /api/v1/namespaces/proxy-9770/services/https:proxy-service-6hbvv:tlsportname1/proxy/: tls baz (200; 10.51178ms) Apr 4 00:32:17.974: INFO: (2) /api/v1/namespaces/proxy-9770/pods/http:proxy-service-6hbvv-6kfct:162/proxy/: bar (200; 10.503231ms) Apr 4 00:32:17.974: INFO: (2) /api/v1/namespaces/proxy-9770/services/https:proxy-service-6hbvv:tlsportname2/proxy/: tls qux (200; 11.236791ms) Apr 4 00:32:17.975: INFO: (2) /api/v1/namespaces/proxy-9770/services/proxy-service-6hbvv:portname1/proxy/: foo (200; 11.494568ms) Apr 4 00:32:17.975: INFO: (2) /api/v1/namespaces/proxy-9770/services/http:proxy-service-6hbvv:portname2/proxy/: bar (200; 11.522492ms) Apr 4 00:32:17.975: INFO: (2) /api/v1/namespaces/proxy-9770/pods/https:proxy-service-6hbvv-6kfct:443/proxy/: ... (200; 12.422142ms) Apr 4 00:32:17.976: INFO: (2) /api/v1/namespaces/proxy-9770/pods/proxy-service-6hbvv-6kfct/proxy/: test (200; 13.28616ms) Apr 4 00:32:17.977: INFO: (2) /api/v1/namespaces/proxy-9770/pods/proxy-service-6hbvv-6kfct:160/proxy/: foo (200; 13.495121ms) Apr 4 00:32:17.977: INFO: (2) /api/v1/namespaces/proxy-9770/pods/http:proxy-service-6hbvv-6kfct:160/proxy/: foo (200; 13.534298ms) Apr 4 00:32:17.977: INFO: (2) /api/v1/namespaces/proxy-9770/services/proxy-service-6hbvv:portname2/proxy/: bar (200; 13.476763ms) Apr 4 00:32:17.977: INFO: (2) /api/v1/namespaces/proxy-9770/pods/https:proxy-service-6hbvv-6kfct:462/proxy/: tls qux (200; 13.547975ms) Apr 4 00:32:17.977: INFO: (2) /api/v1/namespaces/proxy-9770/pods/proxy-service-6hbvv-6kfct:162/proxy/: bar (200; 13.813844ms) Apr 4 00:32:17.980: INFO: (3) /api/v1/namespaces/proxy-9770/pods/proxy-service-6hbvv-6kfct:160/proxy/: foo (200; 2.920916ms) Apr 4 00:32:17.980: INFO: (3) /api/v1/namespaces/proxy-9770/pods/http:proxy-service-6hbvv-6kfct:162/proxy/: bar (200; 2.879244ms) Apr 4 00:32:17.980: INFO: (3) /api/v1/namespaces/proxy-9770/pods/https:proxy-service-6hbvv-6kfct:462/proxy/: tls qux (200; 3.036646ms) Apr 4 
00:32:17.983: INFO: (3) /api/v1/namespaces/proxy-9770/pods/proxy-service-6hbvv-6kfct/proxy/: test (200; 5.553864ms) Apr 4 00:32:17.983: INFO: (3) /api/v1/namespaces/proxy-9770/services/https:proxy-service-6hbvv:tlsportname2/proxy/: tls qux (200; 5.904938ms) Apr 4 00:32:17.983: INFO: (3) /api/v1/namespaces/proxy-9770/pods/proxy-service-6hbvv-6kfct:1080/proxy/: test<... (200; 5.961021ms) Apr 4 00:32:17.983: INFO: (3) /api/v1/namespaces/proxy-9770/pods/https:proxy-service-6hbvv-6kfct:443/proxy/: ... (200; 5.951704ms) Apr 4 00:32:17.983: INFO: (3) /api/v1/namespaces/proxy-9770/pods/http:proxy-service-6hbvv-6kfct:160/proxy/: foo (200; 6.031835ms) Apr 4 00:32:17.983: INFO: (3) /api/v1/namespaces/proxy-9770/services/http:proxy-service-6hbvv:portname1/proxy/: foo (200; 6.015915ms) Apr 4 00:32:17.983: INFO: (3) /api/v1/namespaces/proxy-9770/services/http:proxy-service-6hbvv:portname2/proxy/: bar (200; 6.028713ms) Apr 4 00:32:17.983: INFO: (3) /api/v1/namespaces/proxy-9770/services/proxy-service-6hbvv:portname2/proxy/: bar (200; 6.155904ms) Apr 4 00:32:17.983: INFO: (3) /api/v1/namespaces/proxy-9770/services/https:proxy-service-6hbvv:tlsportname1/proxy/: tls baz (200; 6.223704ms) Apr 4 00:32:17.983: INFO: (3) /api/v1/namespaces/proxy-9770/pods/https:proxy-service-6hbvv-6kfct:460/proxy/: tls baz (200; 6.275301ms) Apr 4 00:32:17.986: INFO: (4) /api/v1/namespaces/proxy-9770/pods/https:proxy-service-6hbvv-6kfct:462/proxy/: tls qux (200; 2.49608ms) Apr 4 00:32:17.986: INFO: (4) /api/v1/namespaces/proxy-9770/pods/proxy-service-6hbvv-6kfct:1080/proxy/: test<... 
(200; 2.521314ms) Apr 4 00:32:17.987: INFO: (4) /api/v1/namespaces/proxy-9770/pods/proxy-service-6hbvv-6kfct:162/proxy/: bar (200; 3.863018ms) Apr 4 00:32:17.988: INFO: (4) /api/v1/namespaces/proxy-9770/pods/https:proxy-service-6hbvv-6kfct:460/proxy/: tls baz (200; 4.132494ms) Apr 4 00:32:17.988: INFO: (4) /api/v1/namespaces/proxy-9770/pods/http:proxy-service-6hbvv-6kfct:162/proxy/: bar (200; 4.15649ms) Apr 4 00:32:17.988: INFO: (4) /api/v1/namespaces/proxy-9770/services/proxy-service-6hbvv:portname1/proxy/: foo (200; 4.261286ms) Apr 4 00:32:17.988: INFO: (4) /api/v1/namespaces/proxy-9770/pods/https:proxy-service-6hbvv-6kfct:443/proxy/: ... (200; 4.433985ms) Apr 4 00:32:17.988: INFO: (4) /api/v1/namespaces/proxy-9770/pods/http:proxy-service-6hbvv-6kfct:160/proxy/: foo (200; 4.466638ms) Apr 4 00:32:17.988: INFO: (4) /api/v1/namespaces/proxy-9770/services/https:proxy-service-6hbvv:tlsportname2/proxy/: tls qux (200; 4.749934ms) Apr 4 00:32:17.988: INFO: (4) /api/v1/namespaces/proxy-9770/pods/proxy-service-6hbvv-6kfct:160/proxy/: foo (200; 4.746456ms) Apr 4 00:32:17.988: INFO: (4) /api/v1/namespaces/proxy-9770/services/proxy-service-6hbvv:portname2/proxy/: bar (200; 4.750401ms) Apr 4 00:32:17.988: INFO: (4) /api/v1/namespaces/proxy-9770/pods/proxy-service-6hbvv-6kfct/proxy/: test (200; 4.748397ms) Apr 4 00:32:17.989: INFO: (4) /api/v1/namespaces/proxy-9770/services/http:proxy-service-6hbvv:portname2/proxy/: bar (200; 5.826188ms) Apr 4 00:32:17.989: INFO: (4) /api/v1/namespaces/proxy-9770/services/https:proxy-service-6hbvv:tlsportname1/proxy/: tls baz (200; 6.009095ms) Apr 4 00:32:17.989: INFO: (4) /api/v1/namespaces/proxy-9770/services/http:proxy-service-6hbvv:portname1/proxy/: foo (200; 6.028614ms) Apr 4 00:32:17.993: INFO: (5) /api/v1/namespaces/proxy-9770/pods/http:proxy-service-6hbvv-6kfct:160/proxy/: foo (200; 3.758096ms) Apr 4 00:32:17.993: INFO: (5) /api/v1/namespaces/proxy-9770/pods/http:proxy-service-6hbvv-6kfct:1080/proxy/: ... 
(200; 3.826602ms) Apr 4 00:32:17.993: INFO: (5) /api/v1/namespaces/proxy-9770/pods/https:proxy-service-6hbvv-6kfct:462/proxy/: tls qux (200; 3.913353ms) Apr 4 00:32:17.993: INFO: (5) /api/v1/namespaces/proxy-9770/pods/proxy-service-6hbvv-6kfct:160/proxy/: foo (200; 3.851936ms) Apr 4 00:32:17.993: INFO: (5) /api/v1/namespaces/proxy-9770/pods/proxy-service-6hbvv-6kfct/proxy/: test (200; 3.950619ms) Apr 4 00:32:17.993: INFO: (5) /api/v1/namespaces/proxy-9770/pods/proxy-service-6hbvv-6kfct:1080/proxy/: test<... (200; 3.898783ms) Apr 4 00:32:17.994: INFO: (5) /api/v1/namespaces/proxy-9770/pods/proxy-service-6hbvv-6kfct:162/proxy/: bar (200; 3.987734ms) Apr 4 00:32:17.994: INFO: (5) /api/v1/namespaces/proxy-9770/pods/https:proxy-service-6hbvv-6kfct:460/proxy/: tls baz (200; 3.879203ms) Apr 4 00:32:17.994: INFO: (5) /api/v1/namespaces/proxy-9770/pods/http:proxy-service-6hbvv-6kfct:162/proxy/: bar (200; 4.125988ms) Apr 4 00:32:17.994: INFO: (5) /api/v1/namespaces/proxy-9770/services/proxy-service-6hbvv:portname2/proxy/: bar (200; 4.116722ms) Apr 4 00:32:17.994: INFO: (5) /api/v1/namespaces/proxy-9770/pods/https:proxy-service-6hbvv-6kfct:443/proxy/: ... (200; 3.165867ms) Apr 4 00:32:17.998: INFO: (6) /api/v1/namespaces/proxy-9770/pods/proxy-service-6hbvv-6kfct:160/proxy/: foo (200; 3.165834ms) Apr 4 00:32:17.999: INFO: (6) /api/v1/namespaces/proxy-9770/pods/https:proxy-service-6hbvv-6kfct:460/proxy/: tls baz (200; 4.348938ms) Apr 4 00:32:17.999: INFO: (6) /api/v1/namespaces/proxy-9770/pods/https:proxy-service-6hbvv-6kfct:443/proxy/: test<... 
(200; 5.125625ms) Apr 4 00:32:18.000: INFO: (6) /api/v1/namespaces/proxy-9770/services/proxy-service-6hbvv:portname2/proxy/: bar (200; 5.255093ms) Apr 4 00:32:18.000: INFO: (6) /api/v1/namespaces/proxy-9770/pods/http:proxy-service-6hbvv-6kfct:160/proxy/: foo (200; 5.176223ms) Apr 4 00:32:18.000: INFO: (6) /api/v1/namespaces/proxy-9770/services/https:proxy-service-6hbvv:tlsportname2/proxy/: tls qux (200; 5.213972ms) Apr 4 00:32:18.000: INFO: (6) /api/v1/namespaces/proxy-9770/services/http:proxy-service-6hbvv:portname1/proxy/: foo (200; 5.331706ms) Apr 4 00:32:18.000: INFO: (6) /api/v1/namespaces/proxy-9770/pods/proxy-service-6hbvv-6kfct:162/proxy/: bar (200; 5.216189ms) Apr 4 00:32:18.000: INFO: (6) /api/v1/namespaces/proxy-9770/services/http:proxy-service-6hbvv:portname2/proxy/: bar (200; 5.22458ms) Apr 4 00:32:18.000: INFO: (6) /api/v1/namespaces/proxy-9770/pods/http:proxy-service-6hbvv-6kfct:162/proxy/: bar (200; 5.244065ms) Apr 4 00:32:18.000: INFO: (6) /api/v1/namespaces/proxy-9770/pods/proxy-service-6hbvv-6kfct/proxy/: test (200; 5.246758ms) Apr 4 00:32:18.000: INFO: (6) /api/v1/namespaces/proxy-9770/services/https:proxy-service-6hbvv:tlsportname1/proxy/: tls baz (200; 5.459222ms) Apr 4 00:32:18.002: INFO: (7) /api/v1/namespaces/proxy-9770/pods/proxy-service-6hbvv-6kfct:162/proxy/: bar (200; 2.213895ms) Apr 4 00:32:18.003: INFO: (7) /api/v1/namespaces/proxy-9770/pods/https:proxy-service-6hbvv-6kfct:443/proxy/: ... (200; 5.108049ms) Apr 4 00:32:18.005: INFO: (7) /api/v1/namespaces/proxy-9770/pods/proxy-service-6hbvv-6kfct/proxy/: test (200; 5.176317ms) Apr 4 00:32:18.005: INFO: (7) /api/v1/namespaces/proxy-9770/pods/proxy-service-6hbvv-6kfct:1080/proxy/: test<... 
(200; 5.142292ms) Apr 4 00:32:18.005: INFO: (7) /api/v1/namespaces/proxy-9770/pods/https:proxy-service-6hbvv-6kfct:460/proxy/: tls baz (200; 5.227456ms) Apr 4 00:32:18.005: INFO: (7) /api/v1/namespaces/proxy-9770/services/http:proxy-service-6hbvv:portname1/proxy/: foo (200; 5.35633ms) Apr 4 00:32:18.006: INFO: (7) /api/v1/namespaces/proxy-9770/pods/http:proxy-service-6hbvv-6kfct:162/proxy/: bar (200; 5.691163ms) Apr 4 00:32:18.006: INFO: (7) /api/v1/namespaces/proxy-9770/services/http:proxy-service-6hbvv:portname2/proxy/: bar (200; 5.833408ms) Apr 4 00:32:18.006: INFO: (7) /api/v1/namespaces/proxy-9770/pods/proxy-service-6hbvv-6kfct:160/proxy/: foo (200; 5.831158ms) Apr 4 00:32:18.006: INFO: (7) /api/v1/namespaces/proxy-9770/services/https:proxy-service-6hbvv:tlsportname1/proxy/: tls baz (200; 5.92193ms) Apr 4 00:32:18.006: INFO: (7) /api/v1/namespaces/proxy-9770/pods/https:proxy-service-6hbvv-6kfct:462/proxy/: tls qux (200; 5.977996ms) Apr 4 00:32:18.006: INFO: (7) /api/v1/namespaces/proxy-9770/services/https:proxy-service-6hbvv:tlsportname2/proxy/: tls qux (200; 5.948706ms) Apr 4 00:32:18.006: INFO: (7) /api/v1/namespaces/proxy-9770/services/proxy-service-6hbvv:portname1/proxy/: foo (200; 6.024342ms) Apr 4 00:32:18.009: INFO: (8) /api/v1/namespaces/proxy-9770/pods/proxy-service-6hbvv-6kfct:162/proxy/: bar (200; 3.009381ms) Apr 4 00:32:18.009: INFO: (8) /api/v1/namespaces/proxy-9770/pods/https:proxy-service-6hbvv-6kfct:460/proxy/: tls baz (200; 2.990546ms) Apr 4 00:32:18.009: INFO: (8) /api/v1/namespaces/proxy-9770/pods/proxy-service-6hbvv-6kfct:1080/proxy/: test<... 
(200; 3.002751ms) Apr 4 00:32:18.009: INFO: (8) /api/v1/namespaces/proxy-9770/pods/proxy-service-6hbvv-6kfct:160/proxy/: foo (200; 2.973903ms) Apr 4 00:32:18.009: INFO: (8) /api/v1/namespaces/proxy-9770/pods/https:proxy-service-6hbvv-6kfct:443/proxy/: test (200; 4.033271ms) Apr 4 00:32:18.010: INFO: (8) /api/v1/namespaces/proxy-9770/pods/http:proxy-service-6hbvv-6kfct:160/proxy/: foo (200; 4.051432ms) Apr 4 00:32:18.010: INFO: (8) /api/v1/namespaces/proxy-9770/pods/http:proxy-service-6hbvv-6kfct:162/proxy/: bar (200; 4.107404ms) Apr 4 00:32:18.010: INFO: (8) /api/v1/namespaces/proxy-9770/pods/http:proxy-service-6hbvv-6kfct:1080/proxy/: ... (200; 4.151049ms) Apr 4 00:32:18.011: INFO: (8) /api/v1/namespaces/proxy-9770/services/http:proxy-service-6hbvv:portname1/proxy/: foo (200; 4.31905ms) Apr 4 00:32:18.011: INFO: (8) /api/v1/namespaces/proxy-9770/pods/https:proxy-service-6hbvv-6kfct:462/proxy/: tls qux (200; 4.532946ms) Apr 4 00:32:18.011: INFO: (8) /api/v1/namespaces/proxy-9770/services/proxy-service-6hbvv:portname1/proxy/: foo (200; 4.544816ms) Apr 4 00:32:18.011: INFO: (8) /api/v1/namespaces/proxy-9770/services/https:proxy-service-6hbvv:tlsportname2/proxy/: tls qux (200; 4.648067ms) Apr 4 00:32:18.011: INFO: (8) /api/v1/namespaces/proxy-9770/services/http:proxy-service-6hbvv:portname2/proxy/: bar (200; 4.692586ms) Apr 4 00:32:18.011: INFO: (8) /api/v1/namespaces/proxy-9770/services/proxy-service-6hbvv:portname2/proxy/: bar (200; 4.678707ms) Apr 4 00:32:18.011: INFO: (8) /api/v1/namespaces/proxy-9770/services/https:proxy-service-6hbvv:tlsportname1/proxy/: tls baz (200; 4.935602ms) Apr 4 00:32:18.014: INFO: (9) /api/v1/namespaces/proxy-9770/services/http:proxy-service-6hbvv:portname1/proxy/: foo (200; 3.107982ms) Apr 4 00:32:18.015: INFO: (9) /api/v1/namespaces/proxy-9770/services/http:proxy-service-6hbvv:portname2/proxy/: bar (200; 3.23775ms) Apr 4 00:32:18.015: INFO: (9) /api/v1/namespaces/proxy-9770/pods/http:proxy-service-6hbvv-6kfct:1080/proxy/: ... 
(200; 3.576697ms) Apr 4 00:32:18.015: INFO: (9) /api/v1/namespaces/proxy-9770/pods/proxy-service-6hbvv-6kfct:162/proxy/: bar (200; 3.546724ms) Apr 4 00:32:18.015: INFO: (9) /api/v1/namespaces/proxy-9770/services/proxy-service-6hbvv:portname1/proxy/: foo (200; 3.957665ms) Apr 4 00:32:18.015: INFO: (9) /api/v1/namespaces/proxy-9770/pods/proxy-service-6hbvv-6kfct:1080/proxy/: test<... (200; 3.983927ms) Apr 4 00:32:18.015: INFO: (9) /api/v1/namespaces/proxy-9770/pods/proxy-service-6hbvv-6kfct/proxy/: test (200; 3.994251ms) Apr 4 00:32:18.015: INFO: (9) /api/v1/namespaces/proxy-9770/services/https:proxy-service-6hbvv:tlsportname2/proxy/: tls qux (200; 3.98232ms) Apr 4 00:32:18.015: INFO: (9) /api/v1/namespaces/proxy-9770/services/proxy-service-6hbvv:portname2/proxy/: bar (200; 4.244005ms) Apr 4 00:32:18.015: INFO: (9) /api/v1/namespaces/proxy-9770/services/https:proxy-service-6hbvv:tlsportname1/proxy/: tls baz (200; 4.200675ms) Apr 4 00:32:18.015: INFO: (9) /api/v1/namespaces/proxy-9770/pods/http:proxy-service-6hbvv-6kfct:160/proxy/: foo (200; 4.281615ms) Apr 4 00:32:18.015: INFO: (9) /api/v1/namespaces/proxy-9770/pods/https:proxy-service-6hbvv-6kfct:460/proxy/: tls baz (200; 4.191635ms) Apr 4 00:32:18.016: INFO: (9) /api/v1/namespaces/proxy-9770/pods/https:proxy-service-6hbvv-6kfct:462/proxy/: tls qux (200; 4.258177ms) Apr 4 00:32:18.016: INFO: (9) /api/v1/namespaces/proxy-9770/pods/http:proxy-service-6hbvv-6kfct:162/proxy/: bar (200; 4.284365ms) Apr 4 00:32:18.016: INFO: (9) /api/v1/namespaces/proxy-9770/pods/proxy-service-6hbvv-6kfct:160/proxy/: foo (200; 4.232069ms) Apr 4 00:32:18.016: INFO: (9) /api/v1/namespaces/proxy-9770/pods/https:proxy-service-6hbvv-6kfct:443/proxy/: test<... 
(200; 3.628523ms) Apr 4 00:32:18.019: INFO: (10) /api/v1/namespaces/proxy-9770/pods/https:proxy-service-6hbvv-6kfct:460/proxy/: tls baz (200; 3.662751ms) Apr 4 00:32:18.019: INFO: (10) /api/v1/namespaces/proxy-9770/pods/http:proxy-service-6hbvv-6kfct:162/proxy/: bar (200; 3.694366ms) Apr 4 00:32:18.019: INFO: (10) /api/v1/namespaces/proxy-9770/pods/http:proxy-service-6hbvv-6kfct:1080/proxy/: ... (200; 3.766351ms) Apr 4 00:32:18.020: INFO: (10) /api/v1/namespaces/proxy-9770/pods/proxy-service-6hbvv-6kfct:160/proxy/: foo (200; 3.862631ms) Apr 4 00:32:18.020: INFO: (10) /api/v1/namespaces/proxy-9770/services/proxy-service-6hbvv:portname1/proxy/: foo (200; 4.045541ms) Apr 4 00:32:18.020: INFO: (10) /api/v1/namespaces/proxy-9770/pods/https:proxy-service-6hbvv-6kfct:443/proxy/: test (200; 4.350723ms) Apr 4 00:32:18.021: INFO: (10) /api/v1/namespaces/proxy-9770/services/https:proxy-service-6hbvv:tlsportname2/proxy/: tls qux (200; 5.180217ms) Apr 4 00:32:18.024: INFO: (11) /api/v1/namespaces/proxy-9770/pods/proxy-service-6hbvv-6kfct:162/proxy/: bar (200; 3.403326ms) Apr 4 00:32:18.024: INFO: (11) /api/v1/namespaces/proxy-9770/pods/https:proxy-service-6hbvv-6kfct:460/proxy/: tls baz (200; 3.364509ms) Apr 4 00:32:18.024: INFO: (11) /api/v1/namespaces/proxy-9770/pods/http:proxy-service-6hbvv-6kfct:160/proxy/: foo (200; 3.394313ms) Apr 4 00:32:18.024: INFO: (11) /api/v1/namespaces/proxy-9770/pods/http:proxy-service-6hbvv-6kfct:162/proxy/: bar (200; 3.338615ms) Apr 4 00:32:18.024: INFO: (11) /api/v1/namespaces/proxy-9770/pods/proxy-service-6hbvv-6kfct:1080/proxy/: test<... (200; 3.412702ms) Apr 4 00:32:18.024: INFO: (11) /api/v1/namespaces/proxy-9770/pods/http:proxy-service-6hbvv-6kfct:1080/proxy/: ... 
(200; 3.466747ms) Apr 4 00:32:18.024: INFO: (11) /api/v1/namespaces/proxy-9770/pods/proxy-service-6hbvv-6kfct:160/proxy/: foo (200; 3.491918ms) Apr 4 00:32:18.024: INFO: (11) /api/v1/namespaces/proxy-9770/pods/proxy-service-6hbvv-6kfct/proxy/: test (200; 3.527384ms) Apr 4 00:32:18.024: INFO: (11) /api/v1/namespaces/proxy-9770/pods/https:proxy-service-6hbvv-6kfct:443/proxy/: test<... (200; 4.042897ms) Apr 4 00:32:18.030: INFO: (12) /api/v1/namespaces/proxy-9770/pods/http:proxy-service-6hbvv-6kfct:1080/proxy/: ... (200; 4.224465ms) Apr 4 00:32:18.030: INFO: (12) /api/v1/namespaces/proxy-9770/pods/http:proxy-service-6hbvv-6kfct:162/proxy/: bar (200; 4.347984ms) Apr 4 00:32:18.030: INFO: (12) /api/v1/namespaces/proxy-9770/pods/https:proxy-service-6hbvv-6kfct:460/proxy/: tls baz (200; 4.330656ms) Apr 4 00:32:18.030: INFO: (12) /api/v1/namespaces/proxy-9770/pods/http:proxy-service-6hbvv-6kfct:160/proxy/: foo (200; 4.269215ms) Apr 4 00:32:18.030: INFO: (12) /api/v1/namespaces/proxy-9770/pods/proxy-service-6hbvv-6kfct/proxy/: test (200; 4.242826ms) Apr 4 00:32:18.030: INFO: (12) /api/v1/namespaces/proxy-9770/pods/https:proxy-service-6hbvv-6kfct:443/proxy/: ... 
(200; 2.928626ms) Apr 4 00:32:18.034: INFO: (13) /api/v1/namespaces/proxy-9770/pods/proxy-service-6hbvv-6kfct:162/proxy/: bar (200; 3.103248ms) Apr 4 00:32:18.034: INFO: (13) /api/v1/namespaces/proxy-9770/pods/http:proxy-service-6hbvv-6kfct:162/proxy/: bar (200; 3.223311ms) Apr 4 00:32:18.035: INFO: (13) /api/v1/namespaces/proxy-9770/pods/proxy-service-6hbvv-6kfct/proxy/: test (200; 3.28993ms) Apr 4 00:32:18.039: INFO: (13) /api/v1/namespaces/proxy-9770/pods/http:proxy-service-6hbvv-6kfct:160/proxy/: foo (200; 8.054161ms) Apr 4 00:32:18.039: INFO: (13) /api/v1/namespaces/proxy-9770/services/http:proxy-service-6hbvv:portname2/proxy/: bar (200; 8.128733ms) Apr 4 00:32:18.040: INFO: (13) /api/v1/namespaces/proxy-9770/pods/https:proxy-service-6hbvv-6kfct:460/proxy/: tls baz (200; 8.25219ms) Apr 4 00:32:18.040: INFO: (13) /api/v1/namespaces/proxy-9770/pods/https:proxy-service-6hbvv-6kfct:462/proxy/: tls qux (200; 8.327421ms) Apr 4 00:32:18.040: INFO: (13) /api/v1/namespaces/proxy-9770/services/https:proxy-service-6hbvv:tlsportname2/proxy/: tls qux (200; 8.377374ms) Apr 4 00:32:18.040: INFO: (13) /api/v1/namespaces/proxy-9770/pods/proxy-service-6hbvv-6kfct:1080/proxy/: test<... 
(200; 8.816336ms) Apr 4 00:32:18.041: INFO: (13) /api/v1/namespaces/proxy-9770/services/proxy-service-6hbvv:portname2/proxy/: bar (200; 9.904859ms) Apr 4 00:32:18.041: INFO: (13) /api/v1/namespaces/proxy-9770/services/http:proxy-service-6hbvv:portname1/proxy/: foo (200; 10.013599ms) Apr 4 00:32:18.041: INFO: (13) /api/v1/namespaces/proxy-9770/services/proxy-service-6hbvv:portname1/proxy/: foo (200; 9.954331ms) Apr 4 00:32:18.041: INFO: (13) /api/v1/namespaces/proxy-9770/services/https:proxy-service-6hbvv:tlsportname1/proxy/: tls baz (200; 10.215938ms) Apr 4 00:32:18.045: INFO: (14) /api/v1/namespaces/proxy-9770/pods/proxy-service-6hbvv-6kfct:160/proxy/: foo (200; 3.301464ms) Apr 4 00:32:18.045: INFO: (14) /api/v1/namespaces/proxy-9770/pods/proxy-service-6hbvv-6kfct/proxy/: test (200; 3.434394ms) Apr 4 00:32:18.045: INFO: (14) /api/v1/namespaces/proxy-9770/pods/https:proxy-service-6hbvv-6kfct:443/proxy/: test<... (200; 4.774847ms) Apr 4 00:32:18.046: INFO: (14) /api/v1/namespaces/proxy-9770/services/https:proxy-service-6hbvv:tlsportname1/proxy/: tls baz (200; 4.835846ms) Apr 4 00:32:18.046: INFO: (14) /api/v1/namespaces/proxy-9770/services/http:proxy-service-6hbvv:portname2/proxy/: bar (200; 4.908334ms) Apr 4 00:32:18.046: INFO: (14) /api/v1/namespaces/proxy-9770/services/http:proxy-service-6hbvv:portname1/proxy/: foo (200; 4.961289ms) Apr 4 00:32:18.046: INFO: (14) /api/v1/namespaces/proxy-9770/pods/http:proxy-service-6hbvv-6kfct:1080/proxy/: ... 
(200; 4.876266ms) Apr 4 00:32:18.047: INFO: (14) /api/v1/namespaces/proxy-9770/services/https:proxy-service-6hbvv:tlsportname2/proxy/: tls qux (200; 4.91588ms) Apr 4 00:32:18.047: INFO: (14) /api/v1/namespaces/proxy-9770/services/proxy-service-6hbvv:portname2/proxy/: bar (200; 4.935507ms) Apr 4 00:32:18.050: INFO: (15) /api/v1/namespaces/proxy-9770/pods/https:proxy-service-6hbvv-6kfct:460/proxy/: tls baz (200; 2.665741ms) Apr 4 00:32:18.050: INFO: (15) /api/v1/namespaces/proxy-9770/pods/proxy-service-6hbvv-6kfct:162/proxy/: bar (200; 2.841443ms) Apr 4 00:32:18.050: INFO: (15) /api/v1/namespaces/proxy-9770/pods/proxy-service-6hbvv-6kfct/proxy/: test (200; 2.842536ms) Apr 4 00:32:18.050: INFO: (15) /api/v1/namespaces/proxy-9770/pods/http:proxy-service-6hbvv-6kfct:160/proxy/: foo (200; 2.861336ms) Apr 4 00:32:18.051: INFO: (15) /api/v1/namespaces/proxy-9770/pods/http:proxy-service-6hbvv-6kfct:1080/proxy/: ... (200; 3.855002ms) Apr 4 00:32:18.051: INFO: (15) /api/v1/namespaces/proxy-9770/pods/proxy-service-6hbvv-6kfct:160/proxy/: foo (200; 3.84796ms) Apr 4 00:32:18.051: INFO: (15) /api/v1/namespaces/proxy-9770/services/proxy-service-6hbvv:portname1/proxy/: foo (200; 4.2869ms) Apr 4 00:32:18.051: INFO: (15) /api/v1/namespaces/proxy-9770/services/https:proxy-service-6hbvv:tlsportname1/proxy/: tls baz (200; 4.659033ms) Apr 4 00:32:18.051: INFO: (15) /api/v1/namespaces/proxy-9770/pods/http:proxy-service-6hbvv-6kfct:162/proxy/: bar (200; 4.617247ms) Apr 4 00:32:18.051: INFO: (15) /api/v1/namespaces/proxy-9770/pods/proxy-service-6hbvv-6kfct:1080/proxy/: test<... (200; 4.681343ms) Apr 4 00:32:18.051: INFO: (15) /api/v1/namespaces/proxy-9770/pods/https:proxy-service-6hbvv-6kfct:462/proxy/: tls qux (200; 4.710958ms) Apr 4 00:32:18.052: INFO: (15) /api/v1/namespaces/proxy-9770/pods/https:proxy-service-6hbvv-6kfct:443/proxy/: ... 
(200; 3.247404ms) Apr 4 00:32:18.055: INFO: (16) /api/v1/namespaces/proxy-9770/pods/https:proxy-service-6hbvv-6kfct:462/proxy/: tls qux (200; 3.290324ms) Apr 4 00:32:18.056: INFO: (16) /api/v1/namespaces/proxy-9770/pods/https:proxy-service-6hbvv-6kfct:443/proxy/: test<... (200; 4.816957ms) Apr 4 00:32:18.057: INFO: (16) /api/v1/namespaces/proxy-9770/pods/proxy-service-6hbvv-6kfct/proxy/: test (200; 4.840679ms) Apr 4 00:32:18.057: INFO: (16) /api/v1/namespaces/proxy-9770/pods/https:proxy-service-6hbvv-6kfct:460/proxy/: tls baz (200; 4.863924ms) Apr 4 00:32:18.057: INFO: (16) /api/v1/namespaces/proxy-9770/services/https:proxy-service-6hbvv:tlsportname1/proxy/: tls baz (200; 5.416303ms) Apr 4 00:32:18.058: INFO: (16) /api/v1/namespaces/proxy-9770/services/http:proxy-service-6hbvv:portname2/proxy/: bar (200; 5.693269ms) Apr 4 00:32:18.058: INFO: (16) /api/v1/namespaces/proxy-9770/services/https:proxy-service-6hbvv:tlsportname2/proxy/: tls qux (200; 5.733715ms) Apr 4 00:32:18.058: INFO: (16) /api/v1/namespaces/proxy-9770/services/proxy-service-6hbvv:portname2/proxy/: bar (200; 5.710015ms) Apr 4 00:32:18.061: INFO: (17) /api/v1/namespaces/proxy-9770/pods/http:proxy-service-6hbvv-6kfct:1080/proxy/: ... (200; 3.46988ms) Apr 4 00:32:18.061: INFO: (17) /api/v1/namespaces/proxy-9770/pods/proxy-service-6hbvv-6kfct:160/proxy/: foo (200; 3.467176ms) Apr 4 00:32:18.061: INFO: (17) /api/v1/namespaces/proxy-9770/pods/https:proxy-service-6hbvv-6kfct:462/proxy/: tls qux (200; 3.384879ms) Apr 4 00:32:18.061: INFO: (17) /api/v1/namespaces/proxy-9770/pods/https:proxy-service-6hbvv-6kfct:443/proxy/: test (200; 3.495927ms) Apr 4 00:32:18.061: INFO: (17) /api/v1/namespaces/proxy-9770/pods/http:proxy-service-6hbvv-6kfct:162/proxy/: bar (200; 3.499224ms) Apr 4 00:32:18.061: INFO: (17) /api/v1/namespaces/proxy-9770/pods/proxy-service-6hbvv-6kfct:1080/proxy/: test<... 
(200; 3.471571ms) Apr 4 00:32:18.062: INFO: (17) /api/v1/namespaces/proxy-9770/pods/https:proxy-service-6hbvv-6kfct:460/proxy/: tls baz (200; 3.775863ms) Apr 4 00:32:18.063: INFO: (17) /api/v1/namespaces/proxy-9770/pods/http:proxy-service-6hbvv-6kfct:160/proxy/: foo (200; 4.789146ms) Apr 4 00:32:18.063: INFO: (17) /api/v1/namespaces/proxy-9770/services/https:proxy-service-6hbvv:tlsportname2/proxy/: tls qux (200; 4.861071ms) Apr 4 00:32:18.063: INFO: (17) /api/v1/namespaces/proxy-9770/services/proxy-service-6hbvv:portname1/proxy/: foo (200; 4.824088ms) Apr 4 00:32:18.063: INFO: (17) /api/v1/namespaces/proxy-9770/services/https:proxy-service-6hbvv:tlsportname1/proxy/: tls baz (200; 4.920892ms) Apr 4 00:32:18.063: INFO: (17) /api/v1/namespaces/proxy-9770/services/http:proxy-service-6hbvv:portname2/proxy/: bar (200; 5.075128ms) Apr 4 00:32:18.063: INFO: (17) /api/v1/namespaces/proxy-9770/services/http:proxy-service-6hbvv:portname1/proxy/: foo (200; 5.159975ms) Apr 4 00:32:18.063: INFO: (17) /api/v1/namespaces/proxy-9770/services/proxy-service-6hbvv:portname2/proxy/: bar (200; 5.116162ms) Apr 4 00:32:18.066: INFO: (18) /api/v1/namespaces/proxy-9770/pods/https:proxy-service-6hbvv-6kfct:443/proxy/: ... 
(200; 2.605576ms) Apr 4 00:32:18.066: INFO: (18) /api/v1/namespaces/proxy-9770/pods/proxy-service-6hbvv-6kfct:162/proxy/: bar (200; 2.74003ms) Apr 4 00:32:18.070: INFO: (18) /api/v1/namespaces/proxy-9770/pods/http:proxy-service-6hbvv-6kfct:162/proxy/: bar (200; 6.393435ms) Apr 4 00:32:18.070: INFO: (18) /api/v1/namespaces/proxy-9770/pods/proxy-service-6hbvv-6kfct/proxy/: test (200; 6.440914ms) Apr 4 00:32:18.070: INFO: (18) /api/v1/namespaces/proxy-9770/pods/proxy-service-6hbvv-6kfct:160/proxy/: foo (200; 6.41359ms) Apr 4 00:32:18.070: INFO: (18) /api/v1/namespaces/proxy-9770/pods/https:proxy-service-6hbvv-6kfct:462/proxy/: tls qux (200; 6.625856ms) Apr 4 00:32:18.070: INFO: (18) /api/v1/namespaces/proxy-9770/services/proxy-service-6hbvv:portname2/proxy/: bar (200; 6.704357ms) Apr 4 00:32:18.070: INFO: (18) /api/v1/namespaces/proxy-9770/pods/https:proxy-service-6hbvv-6kfct:460/proxy/: tls baz (200; 6.663147ms) Apr 4 00:32:18.070: INFO: (18) /api/v1/namespaces/proxy-9770/pods/http:proxy-service-6hbvv-6kfct:160/proxy/: foo (200; 6.673344ms) Apr 4 00:32:18.070: INFO: (18) /api/v1/namespaces/proxy-9770/pods/proxy-service-6hbvv-6kfct:1080/proxy/: test<... 
(200; 6.704756ms) Apr 4 00:32:18.070: INFO: (18) /api/v1/namespaces/proxy-9770/services/https:proxy-service-6hbvv:tlsportname2/proxy/: tls qux (200; 6.689215ms) Apr 4 00:32:18.070: INFO: (18) /api/v1/namespaces/proxy-9770/services/proxy-service-6hbvv:portname1/proxy/: foo (200; 6.699959ms) Apr 4 00:32:18.070: INFO: (18) /api/v1/namespaces/proxy-9770/services/http:proxy-service-6hbvv:portname1/proxy/: foo (200; 6.816036ms) Apr 4 00:32:18.070: INFO: (18) /api/v1/namespaces/proxy-9770/services/http:proxy-service-6hbvv:portname2/proxy/: bar (200; 7.140828ms) Apr 4 00:32:18.070: INFO: (18) /api/v1/namespaces/proxy-9770/services/https:proxy-service-6hbvv:tlsportname1/proxy/: tls baz (200; 7.108874ms) Apr 4 00:32:18.073: INFO: (19) /api/v1/namespaces/proxy-9770/pods/proxy-service-6hbvv-6kfct:162/proxy/: bar (200; 2.911719ms) Apr 4 00:32:18.075: INFO: (19) /api/v1/namespaces/proxy-9770/pods/proxy-service-6hbvv-6kfct/proxy/: test (200; 4.181479ms) Apr 4 00:32:18.075: INFO: (19) /api/v1/namespaces/proxy-9770/pods/http:proxy-service-6hbvv-6kfct:162/proxy/: bar (200; 4.240472ms) Apr 4 00:32:18.075: INFO: (19) /api/v1/namespaces/proxy-9770/pods/http:proxy-service-6hbvv-6kfct:1080/proxy/: ... (200; 4.253228ms) Apr 4 00:32:18.075: INFO: (19) /api/v1/namespaces/proxy-9770/services/proxy-service-6hbvv:portname2/proxy/: bar (200; 4.297286ms) Apr 4 00:32:18.075: INFO: (19) /api/v1/namespaces/proxy-9770/pods/http:proxy-service-6hbvv-6kfct:160/proxy/: foo (200; 4.281653ms) Apr 4 00:32:18.075: INFO: (19) /api/v1/namespaces/proxy-9770/pods/https:proxy-service-6hbvv-6kfct:443/proxy/: test<... 
(200; 4.38132ms) Apr 4 00:32:18.075: INFO: (19) /api/v1/namespaces/proxy-9770/services/http:proxy-service-6hbvv:portname2/proxy/: bar (200; 4.425418ms) Apr 4 00:32:18.075: INFO: (19) /api/v1/namespaces/proxy-9770/services/proxy-service-6hbvv:portname1/proxy/: foo (200; 4.443211ms) Apr 4 00:32:18.075: INFO: (19) /api/v1/namespaces/proxy-9770/services/http:proxy-service-6hbvv:portname1/proxy/: foo (200; 4.457933ms) Apr 4 00:32:18.075: INFO: (19) /api/v1/namespaces/proxy-9770/pods/https:proxy-service-6hbvv-6kfct:462/proxy/: tls qux (200; 4.571239ms) Apr 4 00:32:18.075: INFO: (19) /api/v1/namespaces/proxy-9770/services/https:proxy-service-6hbvv:tlsportname1/proxy/: tls baz (200; 4.498613ms) Apr 4 00:32:18.075: INFO: (19) /api/v1/namespaces/proxy-9770/services/https:proxy-service-6hbvv:tlsportname2/proxy/: tls qux (200; 4.599775ms) Apr 4 00:32:18.075: INFO: (19) /api/v1/namespaces/proxy-9770/pods/proxy-service-6hbvv-6kfct:160/proxy/: foo (200; 4.816691ms) STEP: deleting ReplicationController proxy-service-6hbvv in namespace proxy-9770, will wait for the garbage collector to delete the pods Apr 4 00:32:18.134: INFO: Deleting ReplicationController proxy-service-6hbvv took: 6.580698ms Apr 4 00:32:18.234: INFO: Terminating ReplicationController proxy-service-6hbvv pods took: 100.199292ms [AfterEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 4 00:32:22.834: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "proxy-9770" for this suite. 
• [SLOW TEST:12.102 seconds] [sig-network] Proxy /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:59 should proxy through a service and a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-network] Proxy version v1 should proxy through a service and a pod [Conformance]","total":275,"completed":222,"skipped":4019,"failed":0} SSSSSSSSSS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 4 00:32:22.844: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default command and arguments [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test override all Apr 4 00:32:22.938: INFO: Waiting up to 5m0s for pod "client-containers-5bf71225-52dc-4a60-b21d-6b4949f008de" in namespace "containers-3047" to be "Succeeded or Failed" Apr 4 00:32:22.957: INFO: Pod "client-containers-5bf71225-52dc-4a60-b21d-6b4949f008de": Phase="Pending", Reason="", readiness=false. Elapsed: 18.341708ms Apr 4 00:32:24.961: INFO: Pod "client-containers-5bf71225-52dc-4a60-b21d-6b4949f008de": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.022622879s Apr 4 00:32:26.965: INFO: Pod "client-containers-5bf71225-52dc-4a60-b21d-6b4949f008de": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.026835359s STEP: Saw pod success Apr 4 00:32:26.965: INFO: Pod "client-containers-5bf71225-52dc-4a60-b21d-6b4949f008de" satisfied condition "Succeeded or Failed" Apr 4 00:32:26.968: INFO: Trying to get logs from node latest-worker pod client-containers-5bf71225-52dc-4a60-b21d-6b4949f008de container test-container: STEP: delete the pod Apr 4 00:32:26.987: INFO: Waiting for pod client-containers-5bf71225-52dc-4a60-b21d-6b4949f008de to disappear Apr 4 00:32:26.991: INFO: Pod client-containers-5bf71225-52dc-4a60-b21d-6b4949f008de no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 4 00:32:26.991: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-3047" for this suite. •{"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance]","total":275,"completed":223,"skipped":4029,"failed":0} SSSSSSSSSS ------------------------------ [sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 4 00:32:26.999: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] binary data should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating configMap 
with name configmap-test-upd-c9b7b0f3-3268-4307-966b-4e8a540f4efa STEP: Creating the pod STEP: Waiting for pod with text data STEP: Waiting for pod with binary data [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 4 00:32:31.114: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-7426" for this suite. •{"msg":"PASSED [sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance]","total":275,"completed":224,"skipped":4039,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 4 00:32:31.122: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54 [It] should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating pod busybox-a89f387e-5073-498c-a54b-060297ea8a4d in namespace container-probe-7670 Apr 4 00:32:35.203: INFO: Started pod busybox-a89f387e-5073-498c-a54b-060297ea8a4d in namespace container-probe-7670 STEP: checking the pod's current state and verifying that restartCount is present Apr 4 00:32:35.207: INFO: Initial restart count of pod 
busybox-a89f387e-5073-498c-a54b-060297ea8a4d is 0 Apr 4 00:33:27.317: INFO: Restart count of pod container-probe-7670/busybox-a89f387e-5073-498c-a54b-060297ea8a4d is now 1 (52.110509274s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 4 00:33:27.327: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-7670" for this suite. • [SLOW TEST:56.258 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":275,"completed":225,"skipped":4088,"failed":0} SSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch watch on custom resource definition objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 4 00:33:27.381: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-watch STEP: Waiting for a default service account to be provisioned in namespace [It] watch on custom resource definition objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Apr 4 00:33:27.425: INFO: >>> 
kubeConfig: /root/.kube/config STEP: Creating first CR Apr 4 00:33:28.012: INFO: Got : ADDED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-04-04T00:33:28Z generation:1 name:name1 resourceVersion:5210030 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:f369b42e-cfc5-487b-b269-68475ec8dd62] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Creating second CR Apr 4 00:33:38.017: INFO: Got : ADDED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-04-04T00:33:38Z generation:1 name:name2 resourceVersion:5210072 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:74aa954b-91c9-4d91-9af6-3a44f9bb3e04] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Modifying first CR Apr 4 00:33:48.022: INFO: Got : MODIFIED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-04-04T00:33:28Z generation:2 name:name1 resourceVersion:5210102 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:f369b42e-cfc5-487b-b269-68475ec8dd62] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Modifying second CR Apr 4 00:33:58.028: INFO: Got : MODIFIED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-04-04T00:33:38Z generation:2 name:name2 resourceVersion:5210130 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:74aa954b-91c9-4d91-9af6-3a44f9bb3e04] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Deleting first CR Apr 4 00:34:08.038: INFO: Got : DELETED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-04-04T00:33:28Z generation:2 name:name1 resourceVersion:5210160 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 
uid:f369b42e-cfc5-487b-b269-68475ec8dd62] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Deleting second CR Apr 4 00:34:18.046: INFO: Got : DELETED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-04-04T00:33:38Z generation:2 name:name2 resourceVersion:5210190 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:74aa954b-91c9-4d91-9af6-3a44f9bb3e04] num:map[num1:9223372036854775807 num2:1000000]]} [AfterEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 4 00:34:28.558: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-watch-4063" for this suite. • [SLOW TEST:61.187 seconds] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 CustomResourceDefinition Watch /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_watch.go:42 watch on custom resource definition objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch watch on custom resource definition objects [Conformance]","total":275,"completed":226,"skipped":4097,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected secret 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 4 00:34:28.568: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating projection with secret that has name projected-secret-test-11d476a5-cfc6-4398-b1e6-225f5d31dde7 STEP: Creating a pod to test consume secrets Apr 4 00:34:28.681: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-263c3a07-2690-4c52-99b7-8065c2d97081" in namespace "projected-5266" to be "Succeeded or Failed" Apr 4 00:34:28.686: INFO: Pod "pod-projected-secrets-263c3a07-2690-4c52-99b7-8065c2d97081": Phase="Pending", Reason="", readiness=false. Elapsed: 4.458713ms Apr 4 00:34:30.715: INFO: Pod "pod-projected-secrets-263c3a07-2690-4c52-99b7-8065c2d97081": Phase="Pending", Reason="", readiness=false. Elapsed: 2.033731353s Apr 4 00:34:32.720: INFO: Pod "pod-projected-secrets-263c3a07-2690-4c52-99b7-8065c2d97081": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.038379266s STEP: Saw pod success Apr 4 00:34:32.720: INFO: Pod "pod-projected-secrets-263c3a07-2690-4c52-99b7-8065c2d97081" satisfied condition "Succeeded or Failed" Apr 4 00:34:32.723: INFO: Trying to get logs from node latest-worker2 pod pod-projected-secrets-263c3a07-2690-4c52-99b7-8065c2d97081 container projected-secret-volume-test: STEP: delete the pod Apr 4 00:34:32.788: INFO: Waiting for pod pod-projected-secrets-263c3a07-2690-4c52-99b7-8065c2d97081 to disappear Apr 4 00:34:32.794: INFO: Pod pod-projected-secrets-263c3a07-2690-4c52-99b7-8065c2d97081 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 4 00:34:32.794: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5266" for this suite. •{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":227,"skipped":4109,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 4 00:34:32.802: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide pod UID as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test downward api env vars Apr 4 00:34:32.874: INFO: Waiting up to 5m0s for pod 
"downward-api-b9d51ebe-8140-41ca-a6a3-afd23f772bca" in namespace "downward-api-9888" to be "Succeeded or Failed" Apr 4 00:34:32.878: INFO: Pod "downward-api-b9d51ebe-8140-41ca-a6a3-afd23f772bca": Phase="Pending", Reason="", readiness=false. Elapsed: 3.903608ms Apr 4 00:34:34.882: INFO: Pod "downward-api-b9d51ebe-8140-41ca-a6a3-afd23f772bca": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00825853s Apr 4 00:34:36.890: INFO: Pod "downward-api-b9d51ebe-8140-41ca-a6a3-afd23f772bca": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.016256016s STEP: Saw pod success Apr 4 00:34:36.890: INFO: Pod "downward-api-b9d51ebe-8140-41ca-a6a3-afd23f772bca" satisfied condition "Succeeded or Failed" Apr 4 00:34:36.892: INFO: Trying to get logs from node latest-worker2 pod downward-api-b9d51ebe-8140-41ca-a6a3-afd23f772bca container dapi-container: STEP: delete the pod Apr 4 00:34:36.910: INFO: Waiting for pod downward-api-b9d51ebe-8140-41ca-a6a3-afd23f772bca to disappear Apr 4 00:34:36.915: INFO: Pod downward-api-b9d51ebe-8140-41ca-a6a3-afd23f772bca no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 4 00:34:36.915: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-9888" for this suite. 
•{"msg":"PASSED [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance]","total":275,"completed":228,"skipped":4131,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should be able to change the type from ClusterIP to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 4 00:34:36.922: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:698 [It] should be able to change the type from ClusterIP to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating a service clusterip-service with the type=ClusterIP in namespace services-4030 STEP: Creating active service to test reachability when its FQDN is referred as externalName for another service STEP: creating service externalsvc in namespace services-4030 STEP: creating replication controller externalsvc in namespace services-4030 I0404 00:34:37.065384 7 runners.go:190] Created replication controller with name: externalsvc, namespace: services-4030, replica count: 2 I0404 00:34:40.115770 7 runners.go:190] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0404 00:34:43.116053 7 runners.go:190] externalsvc Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady STEP: changing the ClusterIP service to type=ExternalName Apr 4 00:34:43.156: INFO: 
Creating new exec pod Apr 4 00:34:47.197: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=services-4030 execpodtrlrn -- /bin/sh -x -c nslookup clusterip-service' Apr 4 00:34:50.036: INFO: stderr: "I0404 00:34:49.941317 2842 log.go:172] (0xc00003a6e0) (0xc0007b40a0) Create stream\nI0404 00:34:49.941362 2842 log.go:172] (0xc00003a6e0) (0xc0007b40a0) Stream added, broadcasting: 1\nI0404 00:34:49.944061 2842 log.go:172] (0xc00003a6e0) Reply frame received for 1\nI0404 00:34:49.944104 2842 log.go:172] (0xc00003a6e0) (0xc0003a95e0) Create stream\nI0404 00:34:49.944118 2842 log.go:172] (0xc00003a6e0) (0xc0003a95e0) Stream added, broadcasting: 3\nI0404 00:34:49.945037 2842 log.go:172] (0xc00003a6e0) Reply frame received for 3\nI0404 00:34:49.945077 2842 log.go:172] (0xc00003a6e0) (0xc0007c0000) Create stream\nI0404 00:34:49.945088 2842 log.go:172] (0xc00003a6e0) (0xc0007c0000) Stream added, broadcasting: 5\nI0404 00:34:49.946238 2842 log.go:172] (0xc00003a6e0) Reply frame received for 5\nI0404 00:34:50.017602 2842 log.go:172] (0xc00003a6e0) Data frame received for 5\nI0404 00:34:50.017628 2842 log.go:172] (0xc0007c0000) (5) Data frame handling\nI0404 00:34:50.017644 2842 log.go:172] (0xc0007c0000) (5) Data frame sent\n+ nslookup clusterip-service\nI0404 00:34:50.025806 2842 log.go:172] (0xc00003a6e0) Data frame received for 3\nI0404 00:34:50.025848 2842 log.go:172] (0xc0003a95e0) (3) Data frame handling\nI0404 00:34:50.025868 2842 log.go:172] (0xc0003a95e0) (3) Data frame sent\nI0404 00:34:50.027452 2842 log.go:172] (0xc00003a6e0) Data frame received for 3\nI0404 00:34:50.027477 2842 log.go:172] (0xc0003a95e0) (3) Data frame handling\nI0404 00:34:50.027495 2842 log.go:172] (0xc0003a95e0) (3) Data frame sent\nI0404 00:34:50.028946 2842 log.go:172] (0xc00003a6e0) Data frame received for 3\nI0404 00:34:50.029009 2842 log.go:172] (0xc0003a95e0) (3) Data frame handling\nI0404 00:34:50.029051 2842 
log.go:172] (0xc00003a6e0) Data frame received for 5\nI0404 00:34:50.029070 2842 log.go:172] (0xc0007c0000) (5) Data frame handling\nI0404 00:34:50.030984 2842 log.go:172] (0xc00003a6e0) Data frame received for 1\nI0404 00:34:50.031022 2842 log.go:172] (0xc0007b40a0) (1) Data frame handling\nI0404 00:34:50.031089 2842 log.go:172] (0xc0007b40a0) (1) Data frame sent\nI0404 00:34:50.031128 2842 log.go:172] (0xc00003a6e0) (0xc0007b40a0) Stream removed, broadcasting: 1\nI0404 00:34:50.031149 2842 log.go:172] (0xc00003a6e0) Go away received\nI0404 00:34:50.031661 2842 log.go:172] (0xc00003a6e0) (0xc0007b40a0) Stream removed, broadcasting: 1\nI0404 00:34:50.031686 2842 log.go:172] (0xc00003a6e0) (0xc0003a95e0) Stream removed, broadcasting: 3\nI0404 00:34:50.031699 2842 log.go:172] (0xc00003a6e0) (0xc0007c0000) Stream removed, broadcasting: 5\n" Apr 4 00:34:50.036: INFO: stdout: "Server:\t\t10.96.0.10\nAddress:\t10.96.0.10#53\n\nclusterip-service.services-4030.svc.cluster.local\tcanonical name = externalsvc.services-4030.svc.cluster.local.\nName:\texternalsvc.services-4030.svc.cluster.local\nAddress: 10.96.107.158\n\n" STEP: deleting ReplicationController externalsvc in namespace services-4030, will wait for the garbage collector to delete the pods Apr 4 00:34:50.096: INFO: Deleting ReplicationController externalsvc took: 6.472057ms Apr 4 00:34:50.397: INFO: Terminating ReplicationController externalsvc pods took: 300.281614ms Apr 4 00:35:03.064: INFO: Cleaning up the ClusterIP to ExternalName test service [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 4 00:35:03.078: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-4030" for this suite. 
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:702 • [SLOW TEST:26.179 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from ClusterIP to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from ClusterIP to ExternalName [Conformance]","total":275,"completed":229,"skipped":4151,"failed":0} [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should have a working scale subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 4 00:35:03.101: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:84 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:99 STEP: Creating service test in namespace statefulset-2162 [It] should have a working scale subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating statefulset ss in namespace statefulset-2162 Apr 4 00:35:03.185: INFO: Found 0 stateful pods, waiting for 1 Apr 4 00:35:13.189: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - 
Ready=true STEP: getting scale subresource STEP: updating a scale subresource STEP: verifying the statefulset Spec.Replicas was modified [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:110 Apr 4 00:35:13.209: INFO: Deleting all statefulset in ns statefulset-2162 Apr 4 00:35:13.215: INFO: Scaling statefulset ss to 0 Apr 4 00:35:33.277: INFO: Waiting for statefulset status.replicas updated to 0 Apr 4 00:35:33.281: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 4 00:35:33.294: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-2162" for this suite. • [SLOW TEST:30.199 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 should have a working scale subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should have a working scale subresource [Conformance]","total":275,"completed":230,"skipped":4151,"failed":0} SSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 4 00:35:33.301: INFO: >>> kubeConfig: 
/root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test downward API volume plugin Apr 4 00:35:33.362: INFO: Waiting up to 5m0s for pod "downwardapi-volume-47796240-99c0-45dc-8670-3ba2cb285a90" in namespace "downward-api-472" to be "Succeeded or Failed" Apr 4 00:35:33.394: INFO: Pod "downwardapi-volume-47796240-99c0-45dc-8670-3ba2cb285a90": Phase="Pending", Reason="", readiness=false. Elapsed: 32.101439ms Apr 4 00:35:35.398: INFO: Pod "downwardapi-volume-47796240-99c0-45dc-8670-3ba2cb285a90": Phase="Pending", Reason="", readiness=false. Elapsed: 2.036316968s Apr 4 00:35:37.401: INFO: Pod "downwardapi-volume-47796240-99c0-45dc-8670-3ba2cb285a90": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.039370142s STEP: Saw pod success Apr 4 00:35:37.402: INFO: Pod "downwardapi-volume-47796240-99c0-45dc-8670-3ba2cb285a90" satisfied condition "Succeeded or Failed" Apr 4 00:35:37.404: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-47796240-99c0-45dc-8670-3ba2cb285a90 container client-container: STEP: delete the pod Apr 4 00:35:37.443: INFO: Waiting for pod downwardapi-volume-47796240-99c0-45dc-8670-3ba2cb285a90 to disappear Apr 4 00:35:37.448: INFO: Pod downwardapi-volume-47796240-99c0-45dc-8670-3ba2cb285a90 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 4 00:35:37.449: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-472" for this suite. •{"msg":"PASSED [sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance]","total":275,"completed":231,"skipped":4161,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 4 00:35:37.456: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating projection with secret that has name projected-secret-test-dc098458-869e-4bfc-883d-2dabe585c146 STEP: Creating a pod to test consume secrets Apr 4 00:35:37.549: 
INFO: Waiting up to 5m0s for pod "pod-projected-secrets-f98fe154-b7b5-40c0-9df6-f3c76df4acba" in namespace "projected-5651" to be "Succeeded or Failed" Apr 4 00:35:37.557: INFO: Pod "pod-projected-secrets-f98fe154-b7b5-40c0-9df6-f3c76df4acba": Phase="Pending", Reason="", readiness=false. Elapsed: 8.067377ms Apr 4 00:35:39.560: INFO: Pod "pod-projected-secrets-f98fe154-b7b5-40c0-9df6-f3c76df4acba": Phase="Pending", Reason="", readiness=false. Elapsed: 2.011437593s Apr 4 00:35:41.565: INFO: Pod "pod-projected-secrets-f98fe154-b7b5-40c0-9df6-f3c76df4acba": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.015926237s STEP: Saw pod success Apr 4 00:35:41.565: INFO: Pod "pod-projected-secrets-f98fe154-b7b5-40c0-9df6-f3c76df4acba" satisfied condition "Succeeded or Failed" Apr 4 00:35:41.568: INFO: Trying to get logs from node latest-worker2 pod pod-projected-secrets-f98fe154-b7b5-40c0-9df6-f3c76df4acba container projected-secret-volume-test: STEP: delete the pod Apr 4 00:35:41.588: INFO: Waiting for pod pod-projected-secrets-f98fe154-b7b5-40c0-9df6-f3c76df4acba to disappear Apr 4 00:35:41.592: INFO: Pod pod-projected-secrets-f98fe154-b7b5-40c0-9df6-f3c76df4acba no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 4 00:35:41.593: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5651" for this suite. 
•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance]","total":275,"completed":232,"skipped":4175,"failed":0} SSSSSSSSSS ------------------------------ [k8s.io] Pods should be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 4 00:35:41.622: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:178 [It] should be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: updating the pod Apr 4 00:35:46.204: INFO: Successfully updated pod "pod-update-f5557810-c99f-4b1a-98f7-c5a88e844059" STEP: verifying the updated pod is in kubernetes Apr 4 00:35:46.212: INFO: Pod update OK [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 4 00:35:46.212: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-6301" for this suite. 
•{"msg":"PASSED [k8s.io] Pods should be updated [NodeConformance] [Conformance]","total":275,"completed":233,"skipped":4185,"failed":0} SSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 4 00:35:46.219: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: create the rc1 STEP: create the rc2 STEP: set half of pods created by rc simpletest-rc-to-be-deleted to have rc simpletest-rc-to-stay as owner as well STEP: delete the rc simpletest-rc-to-be-deleted STEP: wait for the rc to be deleted STEP: Gathering metrics W0404 00:35:57.767057 7 metrics_grabber.go:84] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. 
Apr 4 00:35:57.767: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 4 00:35:57.767: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-5432" for this suite. 
• [SLOW TEST:11.556 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]","total":275,"completed":234,"skipped":4194,"failed":0} SSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 4 00:35:57.775: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test downward API volume plugin Apr 4 00:35:57.865: INFO: Waiting up to 5m0s for pod "downwardapi-volume-adaed480-9330-4356-8924-64a261f29509" in namespace "downward-api-4812" to be "Succeeded or Failed" Apr 4 00:35:57.868: INFO: Pod "downwardapi-volume-adaed480-9330-4356-8924-64a261f29509": Phase="Pending", Reason="", readiness=false. 
Elapsed: 3.07341ms Apr 4 00:35:59.872: INFO: Pod "downwardapi-volume-adaed480-9330-4356-8924-64a261f29509": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006979107s Apr 4 00:36:01.876: INFO: Pod "downwardapi-volume-adaed480-9330-4356-8924-64a261f29509": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011287785s STEP: Saw pod success Apr 4 00:36:01.876: INFO: Pod "downwardapi-volume-adaed480-9330-4356-8924-64a261f29509" satisfied condition "Succeeded or Failed" Apr 4 00:36:01.879: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-adaed480-9330-4356-8924-64a261f29509 container client-container: STEP: delete the pod Apr 4 00:36:01.984: INFO: Waiting for pod downwardapi-volume-adaed480-9330-4356-8924-64a261f29509 to disappear Apr 4 00:36:02.018: INFO: Pod downwardapi-volume-adaed480-9330-4356-8924-64a261f29509 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 4 00:36:02.018: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-4812" for this suite. 
•{"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance]","total":275,"completed":235,"skipped":4200,"failed":0} SSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 4 00:36:02.027: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for intra-pod communication: udp [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Performing setup for networking test in namespace pod-network-test-3324 STEP: creating a selector STEP: Creating the service pods in kubernetes Apr 4 00:36:02.297: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable Apr 4 00:36:02.378: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Apr 4 00:36:04.472: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Apr 4 00:36:06.496: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Apr 4 00:36:08.382: INFO: The status of Pod netserver-0 is Running (Ready = false) Apr 4 00:36:10.382: INFO: The status of Pod netserver-0 is Running (Ready = false) Apr 4 00:36:12.382: INFO: The status of Pod netserver-0 is Running (Ready = false) Apr 4 00:36:14.382: INFO: The status of Pod netserver-0 is Running (Ready = false) Apr 4 00:36:16.382: INFO: The status of Pod netserver-0 is Running (Ready = false) 
Apr 4 00:36:18.382: INFO: The status of Pod netserver-0 is Running (Ready = false) Apr 4 00:36:20.382: INFO: The status of Pod netserver-0 is Running (Ready = false) Apr 4 00:36:22.382: INFO: The status of Pod netserver-0 is Running (Ready = true) Apr 4 00:36:22.388: INFO: The status of Pod netserver-1 is Running (Ready = false) Apr 4 00:36:24.392: INFO: The status of Pod netserver-1 is Running (Ready = false) Apr 4 00:36:26.392: INFO: The status of Pod netserver-1 is Running (Ready = true) STEP: Creating test pods Apr 4 00:36:30.433: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.115:8080/dial?request=hostname&protocol=udp&host=10.244.2.62&port=8081&tries=1'] Namespace:pod-network-test-3324 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 4 00:36:30.433: INFO: >>> kubeConfig: /root/.kube/config I0404 00:36:30.474084 7 log.go:172] (0xc002686000) (0xc001ec4be0) Create stream I0404 00:36:30.474126 7 log.go:172] (0xc002686000) (0xc001ec4be0) Stream added, broadcasting: 1 I0404 00:36:30.480871 7 log.go:172] (0xc002686000) Reply frame received for 1 I0404 00:36:30.480927 7 log.go:172] (0xc002686000) (0xc0013d00a0) Create stream I0404 00:36:30.480954 7 log.go:172] (0xc002686000) (0xc0013d00a0) Stream added, broadcasting: 3 I0404 00:36:30.483184 7 log.go:172] (0xc002686000) Reply frame received for 3 I0404 00:36:30.483217 7 log.go:172] (0xc002686000) (0xc0013d0140) Create stream I0404 00:36:30.483229 7 log.go:172] (0xc002686000) (0xc0013d0140) Stream added, broadcasting: 5 I0404 00:36:30.483935 7 log.go:172] (0xc002686000) Reply frame received for 5 I0404 00:36:30.578471 7 log.go:172] (0xc002686000) Data frame received for 3 I0404 00:36:30.578573 7 log.go:172] (0xc0013d00a0) (3) Data frame handling I0404 00:36:30.578661 7 log.go:172] (0xc0013d00a0) (3) Data frame sent I0404 00:36:30.579100 7 log.go:172] (0xc002686000) Data frame received for 3 I0404 00:36:30.579172 7 
log.go:172] (0xc0013d00a0) (3) Data frame handling I0404 00:36:30.579230 7 log.go:172] (0xc002686000) Data frame received for 5 I0404 00:36:30.579261 7 log.go:172] (0xc0013d0140) (5) Data frame handling I0404 00:36:30.581586 7 log.go:172] (0xc002686000) Data frame received for 1 I0404 00:36:30.581618 7 log.go:172] (0xc001ec4be0) (1) Data frame handling I0404 00:36:30.581650 7 log.go:172] (0xc001ec4be0) (1) Data frame sent I0404 00:36:30.581819 7 log.go:172] (0xc002686000) (0xc001ec4be0) Stream removed, broadcasting: 1 I0404 00:36:30.581867 7 log.go:172] (0xc002686000) Go away received I0404 00:36:30.582039 7 log.go:172] (0xc002686000) (0xc001ec4be0) Stream removed, broadcasting: 1 I0404 00:36:30.582117 7 log.go:172] (0xc002686000) (0xc0013d00a0) Stream removed, broadcasting: 3 I0404 00:36:30.582147 7 log.go:172] (0xc002686000) (0xc0013d0140) Stream removed, broadcasting: 5 Apr 4 00:36:30.582: INFO: Waiting for responses: map[] Apr 4 00:36:30.585: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.115:8080/dial?request=hostname&protocol=udp&host=10.244.1.114&port=8081&tries=1'] Namespace:pod-network-test-3324 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 4 00:36:30.585: INFO: >>> kubeConfig: /root/.kube/config I0404 00:36:30.619442 7 log.go:172] (0xc002d20630) (0xc002af92c0) Create stream I0404 00:36:30.619471 7 log.go:172] (0xc002d20630) (0xc002af92c0) Stream added, broadcasting: 1 I0404 00:36:30.621388 7 log.go:172] (0xc002d20630) Reply frame received for 1 I0404 00:36:30.621418 7 log.go:172] (0xc002d20630) (0xc000fa8be0) Create stream I0404 00:36:30.621427 7 log.go:172] (0xc002d20630) (0xc000fa8be0) Stream added, broadcasting: 3 I0404 00:36:30.622310 7 log.go:172] (0xc002d20630) Reply frame received for 3 I0404 00:36:30.622340 7 log.go:172] (0xc002d20630) (0xc002af9540) Create stream I0404 00:36:30.622353 7 log.go:172] (0xc002d20630) (0xc002af9540) Stream added, 
broadcasting: 5 I0404 00:36:30.623173 7 log.go:172] (0xc002d20630) Reply frame received for 5 I0404 00:36:30.685350 7 log.go:172] (0xc002d20630) Data frame received for 3 I0404 00:36:30.685446 7 log.go:172] (0xc000fa8be0) (3) Data frame handling I0404 00:36:30.685535 7 log.go:172] (0xc000fa8be0) (3) Data frame sent I0404 00:36:30.686193 7 log.go:172] (0xc002d20630) Data frame received for 5 I0404 00:36:30.686233 7 log.go:172] (0xc002af9540) (5) Data frame handling I0404 00:36:30.686585 7 log.go:172] (0xc002d20630) Data frame received for 3 I0404 00:36:30.686612 7 log.go:172] (0xc000fa8be0) (3) Data frame handling I0404 00:36:30.688463 7 log.go:172] (0xc002d20630) Data frame received for 1 I0404 00:36:30.688492 7 log.go:172] (0xc002af92c0) (1) Data frame handling I0404 00:36:30.688527 7 log.go:172] (0xc002af92c0) (1) Data frame sent I0404 00:36:30.688668 7 log.go:172] (0xc002d20630) (0xc002af92c0) Stream removed, broadcasting: 1 I0404 00:36:30.688688 7 log.go:172] (0xc002d20630) Go away received I0404 00:36:30.688797 7 log.go:172] (0xc002d20630) (0xc002af92c0) Stream removed, broadcasting: 1 I0404 00:36:30.688832 7 log.go:172] (0xc002d20630) (0xc000fa8be0) Stream removed, broadcasting: 3 I0404 00:36:30.688857 7 log.go:172] (0xc002d20630) (0xc002af9540) Stream removed, broadcasting: 5 Apr 4 00:36:30.688: INFO: Waiting for responses: map[] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 4 00:36:30.688: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-3324" for this suite. 
• [SLOW TEST:28.682 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29 should function for intra-pod communication: udp [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [NodeConformance] [Conformance]","total":275,"completed":236,"skipped":4206,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 4 00:36:30.709: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating configMap with name configmap-test-volume-1c39b099-302f-4785-a778-839a958ad5d0 STEP: Creating a pod to test consume configMaps Apr 4 00:36:30.801: INFO: Waiting up to 5m0s for pod "pod-configmaps-7fb97cc0-90eb-4164-9912-89ec6b0979f7" in namespace "configmap-9398" to be "Succeeded or Failed" Apr 4 00:36:30.809: INFO: Pod "pod-configmaps-7fb97cc0-90eb-4164-9912-89ec6b0979f7": Phase="Pending", Reason="", readiness=false. 
Elapsed: 8.345648ms Apr 4 00:36:32.813: INFO: Pod "pod-configmaps-7fb97cc0-90eb-4164-9912-89ec6b0979f7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012143458s Apr 4 00:36:34.818: INFO: Pod "pod-configmaps-7fb97cc0-90eb-4164-9912-89ec6b0979f7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.016397626s STEP: Saw pod success Apr 4 00:36:34.818: INFO: Pod "pod-configmaps-7fb97cc0-90eb-4164-9912-89ec6b0979f7" satisfied condition "Succeeded or Failed" Apr 4 00:36:34.820: INFO: Trying to get logs from node latest-worker pod pod-configmaps-7fb97cc0-90eb-4164-9912-89ec6b0979f7 container configmap-volume-test: STEP: delete the pod Apr 4 00:36:34.841: INFO: Waiting for pod pod-configmaps-7fb97cc0-90eb-4164-9912-89ec6b0979f7 to disappear Apr 4 00:36:34.847: INFO: Pod pod-configmaps-7fb97cc0-90eb-4164-9912-89ec6b0979f7 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 4 00:36:34.848: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-9398" for this suite. •{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":275,"completed":237,"skipped":4225,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should be able to update and delete ResourceQuota. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 4 00:36:34.854: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to update and delete ResourceQuota. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a ResourceQuota STEP: Getting a ResourceQuota STEP: Updating a ResourceQuota STEP: Verifying a ResourceQuota was modified STEP: Deleting a ResourceQuota STEP: Verifying the deleted ResourceQuota [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 4 00:36:34.965: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-1734" for this suite. •{"msg":"PASSED [sig-api-machinery] ResourceQuota should be able to update and delete ResourceQuota. [Conformance]","total":275,"completed":238,"skipped":4260,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl run pod should create a pod from an image when restart is Never [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 4 00:36:34.972: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219 [BeforeEach] Kubectl run pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1418 [It] should create a pod from an image when restart is Never [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: running the image docker.io/library/httpd:2.4.38-alpine Apr 4 00:36:35.072: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 
--kubeconfig=/root/.kube/config run e2e-test-httpd-pod --restart=Never --image=docker.io/library/httpd:2.4.38-alpine --namespace=kubectl-9163' Apr 4 00:36:35.179: INFO: stderr: "" Apr 4 00:36:35.179: INFO: stdout: "pod/e2e-test-httpd-pod created\n" STEP: verifying the pod e2e-test-httpd-pod was created [AfterEach] Kubectl run pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1423 Apr 4 00:36:35.190: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config delete pods e2e-test-httpd-pod --namespace=kubectl-9163' Apr 4 00:36:42.744: INFO: stderr: "" Apr 4 00:36:42.744: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 4 00:36:42.745: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-9163" for this suite. • [SLOW TEST:7.793 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl run pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1414 should create a pod from an image when restart is Never [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl run pod should create a pod from an image when restart is Never [Conformance]","total":275,"completed":239,"skipped":4290,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Pods 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 4 00:36:42.766: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:178 [It] should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: updating the pod Apr 4 00:36:47.370: INFO: Successfully updated pod "pod-update-activedeadlineseconds-ac7b256d-a2c0-4ac7-aa40-3524aa5d0344" Apr 4 00:36:47.370: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-ac7b256d-a2c0-4ac7-aa40-3524aa5d0344" in namespace "pods-8219" to be "terminated due to deadline exceeded" Apr 4 00:36:47.373: INFO: Pod "pod-update-activedeadlineseconds-ac7b256d-a2c0-4ac7-aa40-3524aa5d0344": Phase="Running", Reason="", readiness=true. Elapsed: 2.970306ms Apr 4 00:36:49.376: INFO: Pod "pod-update-activedeadlineseconds-ac7b256d-a2c0-4ac7-aa40-3524aa5d0344": Phase="Failed", Reason="DeadlineExceeded", readiness=false. Elapsed: 2.006266888s Apr 4 00:36:49.376: INFO: Pod "pod-update-activedeadlineseconds-ac7b256d-a2c0-4ac7-aa40-3524aa5d0344" satisfied condition "terminated due to deadline exceeded" [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 4 00:36:49.376: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-8219" for this suite. 
• [SLOW TEST:6.616 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]","total":275,"completed":240,"skipped":4319,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] removes definition from spec when one version gets changed to not be served [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 4 00:36:49.382: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] removes definition from spec when one version gets changed to not be served [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: set up a multi version CRD Apr 4 00:36:49.441: INFO: >>> kubeConfig: /root/.kube/config STEP: mark a version not serverd STEP: check the unserved version gets removed STEP: check the other version is not changed [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 4 00:37:03.657: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-5031" for this suite. 
• [SLOW TEST:14.281 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 removes definition from spec when one version gets changed to not be served [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] removes definition from spec when one version gets changed to not be served [Conformance]","total":275,"completed":241,"skipped":4331,"failed":0} SSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 4 00:37:03.664: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating configMap with name cm-test-opt-del-895f308d-219c-4b20-a252-abd82f122304 STEP: Creating configMap with name cm-test-opt-upd-7244c3ee-85f4-4946-a7a3-f4a94400b38e STEP: Creating the pod STEP: Deleting configmap cm-test-opt-del-895f308d-219c-4b20-a252-abd82f122304 STEP: Updating configmap cm-test-opt-upd-7244c3ee-85f4-4946-a7a3-f4a94400b38e STEP: Creating configMap with name cm-test-opt-create-bf06ebe5-5612-4945-9c78-f71ea87a4f36 STEP: waiting to observe update in volume [AfterEach] [sig-storage] ConfigMap 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 4 00:38:40.261: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-5883" for this suite. • [SLOW TEST:96.605 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":275,"completed":242,"skipped":4342,"failed":0} SSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 4 00:38:40.269: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating configMap with name configmap-test-volume-map-4a69207c-cd3c-4866-a3e0-3f1218ae167f STEP: Creating a pod to test consume configMaps Apr 4 00:38:40.373: INFO: Waiting up to 5m0s for pod "pod-configmaps-c8336517-f7a7-4a73-af29-8494ecd7ef3b" in namespace "configmap-2397" to be "Succeeded or Failed" Apr 4 00:38:40.387: INFO: Pod "pod-configmaps-c8336517-f7a7-4a73-af29-8494ecd7ef3b": 
Phase="Pending", Reason="", readiness=false. Elapsed: 14.055347ms Apr 4 00:38:42.391: INFO: Pod "pod-configmaps-c8336517-f7a7-4a73-af29-8494ecd7ef3b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018073565s Apr 4 00:38:44.395: INFO: Pod "pod-configmaps-c8336517-f7a7-4a73-af29-8494ecd7ef3b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.022033195s STEP: Saw pod success Apr 4 00:38:44.395: INFO: Pod "pod-configmaps-c8336517-f7a7-4a73-af29-8494ecd7ef3b" satisfied condition "Succeeded or Failed" Apr 4 00:38:44.399: INFO: Trying to get logs from node latest-worker2 pod pod-configmaps-c8336517-f7a7-4a73-af29-8494ecd7ef3b container configmap-volume-test: STEP: delete the pod Apr 4 00:38:44.444: INFO: Waiting for pod pod-configmaps-c8336517-f7a7-4a73-af29-8494ecd7ef3b to disappear Apr 4 00:38:44.465: INFO: Pod pod-configmaps-c8336517-f7a7-4a73-af29-8494ecd7ef3b no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 4 00:38:44.465: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-2397" for this suite. 
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]","total":275,"completed":243,"skipped":4347,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 4 00:38:44.471: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with projected pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating pod pod-subpath-test-projected-7dqs STEP: Creating a pod to test atomic-volume-subpath Apr 4 00:38:44.556: INFO: Waiting up to 5m0s for pod "pod-subpath-test-projected-7dqs" in namespace "subpath-4740" to be "Succeeded or Failed" Apr 4 00:38:44.575: INFO: Pod "pod-subpath-test-projected-7dqs": Phase="Pending", Reason="", readiness=false. Elapsed: 18.571676ms Apr 4 00:38:46.579: INFO: Pod "pod-subpath-test-projected-7dqs": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023031252s Apr 4 00:38:48.583: INFO: Pod "pod-subpath-test-projected-7dqs": Phase="Running", Reason="", readiness=true. Elapsed: 4.026909659s Apr 4 00:38:50.588: INFO: Pod "pod-subpath-test-projected-7dqs": Phase="Running", Reason="", readiness=true. 
Elapsed: 6.031741563s Apr 4 00:38:52.592: INFO: Pod "pod-subpath-test-projected-7dqs": Phase="Running", Reason="", readiness=true. Elapsed: 8.036050887s Apr 4 00:38:54.597: INFO: Pod "pod-subpath-test-projected-7dqs": Phase="Running", Reason="", readiness=true. Elapsed: 10.040194468s Apr 4 00:38:56.601: INFO: Pod "pod-subpath-test-projected-7dqs": Phase="Running", Reason="", readiness=true. Elapsed: 12.044665979s Apr 4 00:38:58.605: INFO: Pod "pod-subpath-test-projected-7dqs": Phase="Running", Reason="", readiness=true. Elapsed: 14.048973419s Apr 4 00:39:00.609: INFO: Pod "pod-subpath-test-projected-7dqs": Phase="Running", Reason="", readiness=true. Elapsed: 16.052673237s Apr 4 00:39:02.613: INFO: Pod "pod-subpath-test-projected-7dqs": Phase="Running", Reason="", readiness=true. Elapsed: 18.05699934s Apr 4 00:39:04.617: INFO: Pod "pod-subpath-test-projected-7dqs": Phase="Running", Reason="", readiness=true. Elapsed: 20.060726735s Apr 4 00:39:06.621: INFO: Pod "pod-subpath-test-projected-7dqs": Phase="Running", Reason="", readiness=true. Elapsed: 22.06480185s Apr 4 00:39:08.626: INFO: Pod "pod-subpath-test-projected-7dqs": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 24.069179944s STEP: Saw pod success Apr 4 00:39:08.626: INFO: Pod "pod-subpath-test-projected-7dqs" satisfied condition "Succeeded or Failed" Apr 4 00:39:08.629: INFO: Trying to get logs from node latest-worker2 pod pod-subpath-test-projected-7dqs container test-container-subpath-projected-7dqs: STEP: delete the pod Apr 4 00:39:08.662: INFO: Waiting for pod pod-subpath-test-projected-7dqs to disappear Apr 4 00:39:08.689: INFO: Pod pod-subpath-test-projected-7dqs no longer exists STEP: Deleting pod pod-subpath-test-projected-7dqs Apr 4 00:39:08.689: INFO: Deleting pod "pod-subpath-test-projected-7dqs" in namespace "subpath-4740" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 4 00:39:08.691: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-4740" for this suite. • [SLOW TEST:24.228 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with projected pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [LinuxOnly] [Conformance]","total":275,"completed":244,"skipped":4365,"failed":0} SSSSSSSSS ------------------------------ [k8s.io] Probing container should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes 
client Apr 4 00:39:08.700: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54 [It] should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating pod busybox-45448ecf-c4ca-46ff-95f6-30e5489a0343 in namespace container-probe-3550 Apr 4 00:39:12.769: INFO: Started pod busybox-45448ecf-c4ca-46ff-95f6-30e5489a0343 in namespace container-probe-3550 STEP: checking the pod's current state and verifying that restartCount is present Apr 4 00:39:12.772: INFO: Initial restart count of pod busybox-45448ecf-c4ca-46ff-95f6-30e5489a0343 is 0 STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 4 00:43:13.412: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-3550" for this suite. 
• [SLOW TEST:244.722 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":275,"completed":245,"skipped":4374,"failed":0} SSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 4 00:43:13.422: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test emptydir volume type on node default medium Apr 4 00:43:13.488: INFO: Waiting up to 5m0s for pod "pod-db46a8ef-47ff-4794-ae48-7e8d1e5f9135" in namespace "emptydir-2050" to be "Succeeded or Failed" Apr 4 00:43:13.505: INFO: Pod "pod-db46a8ef-47ff-4794-ae48-7e8d1e5f9135": Phase="Pending", Reason="", readiness=false. Elapsed: 17.238531ms Apr 4 00:43:15.509: INFO: Pod "pod-db46a8ef-47ff-4794-ae48-7e8d1e5f9135": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.021218111s Apr 4 00:43:17.512: INFO: Pod "pod-db46a8ef-47ff-4794-ae48-7e8d1e5f9135": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.024463498s STEP: Saw pod success Apr 4 00:43:17.513: INFO: Pod "pod-db46a8ef-47ff-4794-ae48-7e8d1e5f9135" satisfied condition "Succeeded or Failed" Apr 4 00:43:17.516: INFO: Trying to get logs from node latest-worker pod pod-db46a8ef-47ff-4794-ae48-7e8d1e5f9135 container test-container: STEP: delete the pod Apr 4 00:43:17.554: INFO: Waiting for pod pod-db46a8ef-47ff-4794-ae48-7e8d1e5f9135 to disappear Apr 4 00:43:17.558: INFO: Pod pod-db46a8ef-47ff-4794-ae48-7e8d1e5f9135 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 4 00:43:17.558: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-2050" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir volumes volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":246,"skipped":4384,"failed":0} SSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields at the schema root [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 4 00:43:17.565: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD preserving unknown fields at the schema root [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Apr 4 00:43:17.628: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with any unknown properties Apr 4 00:43:20.572: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-7670 create -f -' Apr 4 00:43:23.792: INFO: stderr: "" Apr 4 00:43:23.792: INFO: stdout: "e2e-test-crd-publish-openapi-5462-crd.crd-publish-openapi-test-unknown-at-root.example.com/test-cr created\n" Apr 4 00:43:23.792: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-7670 delete e2e-test-crd-publish-openapi-5462-crds test-cr' Apr 4 00:43:23.918: INFO: stderr: "" Apr 4 00:43:23.918: INFO: stdout: "e2e-test-crd-publish-openapi-5462-crd.crd-publish-openapi-test-unknown-at-root.example.com \"test-cr\" deleted\n" Apr 4 00:43:23.918: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-7670 apply -f -' Apr 4 00:43:24.180: INFO: stderr: "" Apr 4 00:43:24.180: INFO: stdout: "e2e-test-crd-publish-openapi-5462-crd.crd-publish-openapi-test-unknown-at-root.example.com/test-cr created\n" Apr 4 00:43:24.180: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-7670 delete e2e-test-crd-publish-openapi-5462-crds test-cr' Apr 4 00:43:24.281: INFO: stderr: "" Apr 4 00:43:24.281: INFO: stdout: "e2e-test-crd-publish-openapi-5462-crd.crd-publish-openapi-test-unknown-at-root.example.com \"test-cr\" deleted\n" STEP: kubectl explain works to explain CR Apr 4 00:43:24.281: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-5462-crds' 
Apr 4 00:43:24.484: INFO: stderr: ""
Apr 4 00:43:24.484: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-5462-crd\nVERSION: crd-publish-openapi-test-unknown-at-root.example.com/v1\n\nDESCRIPTION:\n \n"
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 4 00:43:26.344: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-7670" for this suite.
• [SLOW TEST:8.786 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for CRD preserving unknown fields at the schema root [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields at the schema root [Conformance]","total":275,"completed":247,"skipped":4395,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 4 00:43:26.351: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54
[It] with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Apr 4 00:43:26.452: INFO: The status of Pod test-webserver-24f187ba-4c8f-4c23-b165-232e8560d1ab is Pending, waiting for it to be Running (with Ready = true)
Apr 4 00:43:28.456: INFO: The status of Pod test-webserver-24f187ba-4c8f-4c23-b165-232e8560d1ab is Pending, waiting for it to be Running (with Ready = true)
Apr 4 00:43:30.455: INFO: The status of Pod test-webserver-24f187ba-4c8f-4c23-b165-232e8560d1ab is Running (Ready = false)
Apr 4 00:43:32.456: INFO: The status of Pod test-webserver-24f187ba-4c8f-4c23-b165-232e8560d1ab is Running (Ready = false)
Apr 4 00:43:34.456: INFO: The status of Pod test-webserver-24f187ba-4c8f-4c23-b165-232e8560d1ab is Running (Ready = false)
Apr 4 00:43:36.456: INFO: The status of Pod test-webserver-24f187ba-4c8f-4c23-b165-232e8560d1ab is Running (Ready = false)
Apr 4 00:43:38.457: INFO: The status of Pod test-webserver-24f187ba-4c8f-4c23-b165-232e8560d1ab is Running (Ready = false)
Apr 4 00:43:40.456: INFO: The status of Pod test-webserver-24f187ba-4c8f-4c23-b165-232e8560d1ab is Running (Ready = false)
Apr 4 00:43:42.456: INFO: The status of Pod test-webserver-24f187ba-4c8f-4c23-b165-232e8560d1ab is Running (Ready = false)
Apr 4 00:43:44.457: INFO: The status of Pod test-webserver-24f187ba-4c8f-4c23-b165-232e8560d1ab is Running (Ready = false)
Apr 4 00:43:46.456: INFO: The status of Pod test-webserver-24f187ba-4c8f-4c23-b165-232e8560d1ab is Running (Ready = false)
Apr 4 00:43:48.456: INFO: The status of Pod test-webserver-24f187ba-4c8f-4c23-b165-232e8560d1ab is Running (Ready = false)
Apr 4 00:43:50.457: INFO: The status of Pod test-webserver-24f187ba-4c8f-4c23-b165-232e8560d1ab is Running (Ready = false)
Apr 4 00:43:52.457: INFO: The status of Pod test-webserver-24f187ba-4c8f-4c23-b165-232e8560d1ab is Running (Ready = true)
Apr 4 00:43:52.460: INFO: Container started at 2020-04-04 00:43:28 +0000 UTC, pod became ready at 2020-04-04 00:43:51 +0000 UTC
[AfterEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 4 00:43:52.460: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-7495" for this suite.
• [SLOW TEST:26.117 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]","total":275,"completed":248,"skipped":4437,"failed":0}
S
------------------------------
[sig-cli] Kubectl client Kubectl replace should update a single-container pod's image [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 4 00:43:52.469: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219
[BeforeEach] Kubectl replace
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1454
[It] should update a single-container pod's image [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: running the image docker.io/library/httpd:2.4.38-alpine
Apr 4 00:43:52.545: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config run e2e-test-httpd-pod --image=docker.io/library/httpd:2.4.38-alpine --labels=run=e2e-test-httpd-pod --namespace=kubectl-7409'
Apr 4 00:43:52.690: INFO: stderr: ""
Apr 4 00:43:52.690: INFO: stdout: "pod/e2e-test-httpd-pod created\n"
STEP: verifying the pod e2e-test-httpd-pod is running
STEP: verifying the pod e2e-test-httpd-pod was created
Apr 4 00:43:57.741: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pod e2e-test-httpd-pod --namespace=kubectl-7409 -o json'
Apr 4 00:43:57.921: INFO: stderr: ""
Apr 4 00:43:57.921: INFO: stdout: "{\n \"apiVersion\": \"v1\",\n \"kind\": \"Pod\",\n \"metadata\": {\n \"creationTimestamp\": \"2020-04-04T00:43:52Z\",\n \"labels\": {\n \"run\": \"e2e-test-httpd-pod\"\n },\n \"name\": \"e2e-test-httpd-pod\",\n \"namespace\": \"kubectl-7409\",\n \"resourceVersion\": \"5212748\",\n \"selfLink\": \"/api/v1/namespaces/kubectl-7409/pods/e2e-test-httpd-pod\",\n \"uid\": \"54a386c5-9aef-4b65-a1b7-acba7607112c\"\n },\n \"spec\": {\n \"containers\": [\n {\n \"image\": \"docker.io/library/httpd:2.4.38-alpine\",\n \"imagePullPolicy\": \"IfNotPresent\",\n \"name\": \"e2e-test-httpd-pod\",\n \"resources\": {},\n \"terminationMessagePath\": \"/dev/termination-log\",\n \"terminationMessagePolicy\": \"File\",\n \"volumeMounts\": [\n {\n \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n \"name\": \"default-token-tx5vf\",\n \"readOnly\": true\n }\n ]\n }\n ],\n \"dnsPolicy\": \"ClusterFirst\",\n \"enableServiceLinks\": true,\n \"nodeName\": \"latest-worker2\",\n \"priority\": 0,\n \"restartPolicy\": \"Always\",\n \"schedulerName\": \"default-scheduler\",\n \"securityContext\": {},\n \"serviceAccount\": \"default\",\n \"serviceAccountName\": \"default\",\n \"terminationGracePeriodSeconds\": 30,\n \"tolerations\": [\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/not-ready\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n },\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/unreachable\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n }\n ],\n \"volumes\": [\n {\n \"name\": \"default-token-tx5vf\",\n \"secret\": {\n \"defaultMode\": 420,\n \"secretName\": \"default-token-tx5vf\"\n }\n }\n ]\n },\n \"status\": {\n \"conditions\": [\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-04-04T00:43:52Z\",\n \"status\": \"True\",\n \"type\": \"Initialized\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-04-04T00:43:55Z\",\n \"status\": \"True\",\n \"type\": \"Ready\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-04-04T00:43:55Z\",\n \"status\": \"True\",\n \"type\": \"ContainersReady\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-04-04T00:43:52Z\",\n \"status\": \"True\",\n \"type\": \"PodScheduled\"\n }\n ],\n \"containerStatuses\": [\n {\n \"containerID\": \"containerd://dee6b91244b9881f3e8792d4b53a75dbe71181310f52f1020ac7cadaca6cd6ea\",\n \"image\": \"docker.io/library/httpd:2.4.38-alpine\",\n \"imageID\": \"docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060\",\n \"lastState\": {},\n \"name\": \"e2e-test-httpd-pod\",\n \"ready\": true,\n \"restartCount\": 0,\n \"started\": true,\n \"state\": {\n \"running\": {\n \"startedAt\": \"2020-04-04T00:43:54Z\"\n }\n }\n }\n ],\n \"hostIP\": \"172.17.0.12\",\n \"phase\": \"Running\",\n \"podIP\": \"10.244.1.118\",\n \"podIPs\": [\n {\n \"ip\": \"10.244.1.118\"\n }\n ],\n \"qosClass\": \"BestEffort\",\n \"startTime\": \"2020-04-04T00:43:52Z\"\n }\n}\n"
STEP: replace the image in the pod
Apr 4 00:43:57.921: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config replace -f - --namespace=kubectl-7409'
Apr 4 00:43:58.330: INFO: stderr: ""
Apr 4 00:43:58.330: INFO: stdout: "pod/e2e-test-httpd-pod replaced\n"
STEP: verifying the pod e2e-test-httpd-pod has the right image docker.io/library/busybox:1.29
[AfterEach] Kubectl replace
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1459
Apr 4 00:43:58.358: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config delete pods e2e-test-httpd-pod --namespace=kubectl-7409'
Apr 4 00:44:01.492: INFO: stderr: ""
Apr 4 00:44:01.492: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 4 00:44:01.492: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-7409" for this suite.
• [SLOW TEST:9.037 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl replace
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1450
    should update a single-container pod's image [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl replace should update a single-container pod's image [Conformance]","total":275,"completed":249,"skipped":4438,"failed":0}
SSSSSS
------------------------------
[sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 4 00:44:01.506: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide host IP as an env var [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test downward api env vars
Apr 4 00:44:01.569: INFO: Waiting up to 5m0s for pod "downward-api-557d6eda-eb75-4ee7-b35e-97061469d778" in namespace "downward-api-9303" to be "Succeeded or Failed"
Apr 4 00:44:01.573: INFO: Pod "downward-api-557d6eda-eb75-4ee7-b35e-97061469d778": Phase="Pending", Reason="", readiness=false. Elapsed: 4.208274ms
Apr 4 00:44:03.577: INFO: Pod "downward-api-557d6eda-eb75-4ee7-b35e-97061469d778": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00815662s
Apr 4 00:44:05.581: INFO: Pod "downward-api-557d6eda-eb75-4ee7-b35e-97061469d778": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011631681s
STEP: Saw pod success
Apr 4 00:44:05.581: INFO: Pod "downward-api-557d6eda-eb75-4ee7-b35e-97061469d778" satisfied condition "Succeeded or Failed"
Apr 4 00:44:05.584: INFO: Trying to get logs from node latest-worker2 pod downward-api-557d6eda-eb75-4ee7-b35e-97061469d778 container dapi-container:
STEP: delete the pod
Apr 4 00:44:05.616: INFO: Waiting for pod downward-api-557d6eda-eb75-4ee7-b35e-97061469d778 to disappear
Apr 4 00:44:05.621: INFO: Pod downward-api-557d6eda-eb75-4ee7-b35e-97061469d778 no longer exists
[AfterEach] [sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 4 00:44:05.621: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-9303" for this suite.
•{"msg":"PASSED [sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance]","total":275,"completed":250,"skipped":4444,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 4 00:44:05.628: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating configMap with name configmap-test-volume-map-70d754e9-db24-41b9-aac0-4d675612ddcd
STEP: Creating a pod to test consume configMaps
Apr 4 00:44:05.888: INFO: Waiting up to 5m0s for pod "pod-configmaps-d1e69c6d-145f-414e-9d2b-1a8bb0b11866" in namespace "configmap-3073" to be "Succeeded or Failed"
Apr 4 00:44:05.897: INFO: Pod "pod-configmaps-d1e69c6d-145f-414e-9d2b-1a8bb0b11866": Phase="Pending", Reason="", readiness=false. Elapsed: 8.590266ms
Apr 4 00:44:07.906: INFO: Pod "pod-configmaps-d1e69c6d-145f-414e-9d2b-1a8bb0b11866": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017666378s
Apr 4 00:44:09.910: INFO: Pod "pod-configmaps-d1e69c6d-145f-414e-9d2b-1a8bb0b11866": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.022020188s
STEP: Saw pod success
Apr 4 00:44:09.911: INFO: Pod "pod-configmaps-d1e69c6d-145f-414e-9d2b-1a8bb0b11866" satisfied condition "Succeeded or Failed"
Apr 4 00:44:09.914: INFO: Trying to get logs from node latest-worker2 pod pod-configmaps-d1e69c6d-145f-414e-9d2b-1a8bb0b11866 container configmap-volume-test:
STEP: delete the pod
Apr 4 00:44:09.956: INFO: Waiting for pod pod-configmaps-d1e69c6d-145f-414e-9d2b-1a8bb0b11866 to disappear
Apr 4 00:44:09.960: INFO: Pod pod-configmaps-d1e69c6d-145f-414e-9d2b-1a8bb0b11866 no longer exists
[AfterEach] [sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 4 00:44:09.960: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-3073" for this suite.
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":275,"completed":251,"skipped":4478,"failed":0}
SSSSSSSSSSSSSSSSS
------------------------------
[sig-network] DNS should provide DNS for the cluster [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 4 00:44:09.967: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for the cluster [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-7466.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done
STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-7466.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done
STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Apr 4 00:44:14.129: INFO: DNS probes using dns-7466/dns-test-1899bbe2-2ab3-473d-9283-81ef062be472 succeeded
STEP: deleting the pod
[AfterEach] [sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 4 00:44:14.159: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-7466" for this suite.
•{"msg":"PASSED [sig-network] DNS should provide DNS for the cluster [Conformance]","total":275,"completed":252,"skipped":4495,"failed":0}
S
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 4 00:44:14.198: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:91
Apr 4 00:44:14.261: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Apr 4 00:44:14.503: INFO: Waiting for terminating namespaces to be deleted...
Apr 4 00:44:14.507: INFO: Logging pods the kubelet thinks is on node latest-worker before test
Apr 4 00:44:14.512: INFO: kindnet-vnjgh from kube-system started at 2020-03-15 18:28:07 +0000 UTC (1 container statuses recorded)
Apr 4 00:44:14.512: INFO: Container kindnet-cni ready: true, restart count 0
Apr 4 00:44:14.512: INFO: kube-proxy-s9v6p from kube-system started at 2020-03-15 18:28:07 +0000 UTC (1 container statuses recorded)
Apr 4 00:44:14.512: INFO: Container kube-proxy ready: true, restart count 0
Apr 4 00:44:14.512: INFO: Logging pods the kubelet thinks is on node latest-worker2 before test
Apr 4 00:44:14.516: INFO: kindnet-zq6gp from kube-system started at 2020-03-15 18:28:07 +0000 UTC (1 container statuses recorded)
Apr 4 00:44:14.516: INFO: Container kindnet-cni ready: true, restart count 0
Apr 4 00:44:14.516: INFO: kube-proxy-c5xlk from kube-system started at 2020-03-15 18:28:07 +0000 UTC (1 container statuses recorded)
Apr 4 00:44:14.516: INFO: Container kube-proxy ready: true, restart count 0
[It] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Trying to launch a pod without a label to get a node which can launch it.
STEP: Explicitly delete pod here to free the resource it takes.
STEP: Trying to apply a random label on the found node.
STEP: verifying the node has the label kubernetes.io/e2e-455b3f93-3765-418b-a596-d906fdddc9d0 95
STEP: Trying to create a pod(pod4) with hostport 54322 and hostIP 0.0.0.0(empty string here) and expect scheduled
STEP: Trying to create another pod(pod5) with hostport 54322 but hostIP 127.0.0.1 on the node which pod4 resides and expect not scheduled
STEP: removing the label kubernetes.io/e2e-455b3f93-3765-418b-a596-d906fdddc9d0 off the node latest-worker2
STEP: verifying the node doesn't have the label kubernetes.io/e2e-455b3f93-3765-418b-a596-d906fdddc9d0
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 4 00:49:22.672: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-2894" for this suite.
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:82
• [SLOW TEST:308.481 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40
  validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]","total":275,"completed":253,"skipped":4496,"failed":0}
[sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 4 00:49:22.679: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating secret with name s-test-opt-del-21e11cd1-567e-4e9d-a44b-6091338fedf8
STEP: Creating secret with name s-test-opt-upd-db046e68-10fa-4c5a-aa8a-3cbc048c6dfb
STEP: Creating the pod
STEP: Deleting secret s-test-opt-del-21e11cd1-567e-4e9d-a44b-6091338fedf8
STEP: Updating secret s-test-opt-upd-db046e68-10fa-4c5a-aa8a-3cbc048c6dfb
STEP: Creating secret with name s-test-opt-create-ac17dc1f-7a5d-47e4-83fa-2434f0b1e5fa
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 4 00:50:33.156: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-5808" for this suite.
• [SLOW TEST:70.484 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:35
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance]","total":275,"completed":254,"skipped":4496,"failed":0}
S
------------------------------
[k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 4 00:50:33.164: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test override arguments
Apr 4 00:50:33.239: INFO: Waiting up to 5m0s for pod "client-containers-f88c14b3-53ca-42e2-a23a-08379ca7b2a0" in namespace "containers-8064" to be "Succeeded or Failed"
Apr 4 00:50:33.247: INFO: Pod "client-containers-f88c14b3-53ca-42e2-a23a-08379ca7b2a0": Phase="Pending", Reason="", readiness=false. Elapsed: 7.872458ms
Apr 4 00:50:35.250: INFO: Pod "client-containers-f88c14b3-53ca-42e2-a23a-08379ca7b2a0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.011583631s
Apr 4 00:50:37.254: INFO: Pod "client-containers-f88c14b3-53ca-42e2-a23a-08379ca7b2a0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.015728538s
STEP: Saw pod success
Apr 4 00:50:37.254: INFO: Pod "client-containers-f88c14b3-53ca-42e2-a23a-08379ca7b2a0" satisfied condition "Succeeded or Failed"
Apr 4 00:50:37.257: INFO: Trying to get logs from node latest-worker2 pod client-containers-f88c14b3-53ca-42e2-a23a-08379ca7b2a0 container test-container:
STEP: delete the pod
Apr 4 00:50:37.289: INFO: Waiting for pod client-containers-f88c14b3-53ca-42e2-a23a-08379ca7b2a0 to disappear
Apr 4 00:50:37.294: INFO: Pod client-containers-f88c14b3-53ca-42e2-a23a-08379ca7b2a0 no longer exists
[AfterEach] [k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 4 00:50:37.294: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-8064" for this suite.
•{"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]","total":275,"completed":255,"skipped":4497,"failed":0}
SS
------------------------------
[sig-api-machinery] ResourceQuota should verify ResourceQuota with best effort scope. [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 4 00:50:37.301: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should verify ResourceQuota with best effort scope. [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a ResourceQuota with best effort scope
STEP: Ensuring ResourceQuota status is calculated
STEP: Creating a ResourceQuota with not best effort scope
STEP: Ensuring ResourceQuota status is calculated
STEP: Creating a best-effort pod
STEP: Ensuring resource quota with best effort scope captures the pod usage
STEP: Ensuring resource quota with not best effort ignored the pod usage
STEP: Deleting the pod
STEP: Ensuring resource quota status released the pod usage
STEP: Creating a not best-effort pod
STEP: Ensuring resource quota with not best effort scope captures the pod usage
STEP: Ensuring resource quota with best effort scope ignored the pod usage
STEP: Deleting the pod
STEP: Ensuring resource quota status released the pod usage
[AfterEach] [sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 4 00:50:53.682: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-667" for this suite.
• [SLOW TEST:16.389 seconds]
[sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should verify ResourceQuota with best effort scope. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with best effort scope. [Conformance]","total":275,"completed":256,"skipped":4499,"failed":0}
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 4 00:50:53.691: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating configMap with name projected-configmap-test-volume-map-a96b1e14-12ca-42de-9110-d80876283669
STEP: Creating a pod to test consume configMaps
Apr 4 00:50:53.788: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-31c9cd46-6886-417a-8a0c-94522a10b170" in namespace "projected-2379" to be "Succeeded or Failed"
Apr 4 00:50:53.792: INFO: Pod "pod-projected-configmaps-31c9cd46-6886-417a-8a0c-94522a10b170": Phase="Pending", Reason="", readiness=false. Elapsed: 3.405802ms
Apr 4 00:50:55.796: INFO: Pod "pod-projected-configmaps-31c9cd46-6886-417a-8a0c-94522a10b170": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007382265s
Apr 4 00:50:57.800: INFO: Pod "pod-projected-configmaps-31c9cd46-6886-417a-8a0c-94522a10b170": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011822968s
STEP: Saw pod success
Apr 4 00:50:57.800: INFO: Pod "pod-projected-configmaps-31c9cd46-6886-417a-8a0c-94522a10b170" satisfied condition "Succeeded or Failed"
Apr 4 00:50:57.804: INFO: Trying to get logs from node latest-worker pod pod-projected-configmaps-31c9cd46-6886-417a-8a0c-94522a10b170 container projected-configmap-volume-test:
STEP: delete the pod
Apr 4 00:50:57.823: INFO: Waiting for pod pod-projected-configmaps-31c9cd46-6886-417a-8a0c-94522a10b170 to disappear
Apr 4 00:50:57.827: INFO: Pod pod-projected-configmaps-31c9cd46-6886-417a-8a0c-94522a10b170 no longer exists
[AfterEach] [sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 4 00:50:57.828: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-2379" for this suite.
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":257,"skipped":4521,"failed":0}
SSSS
------------------------------
[sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 4 00:50:57.834: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:134
[It] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Apr 4 00:50:57.961: INFO: Creating simple daemon set daemon-set STEP: Check that daemon pods launch on every node of the cluster. Apr 4 00:50:57.968: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 4 00:50:57.971: INFO: Number of nodes with available pods: 0 Apr 4 00:50:57.971: INFO: Node latest-worker is running more than one daemon pod Apr 4 00:50:59.011: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 4 00:50:59.014: INFO: Number of nodes with available pods: 0 Apr 4 00:50:59.014: INFO: Node latest-worker is running more than one daemon pod Apr 4 00:50:59.976: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 4 00:50:59.979: INFO: Number of nodes with available pods: 0 Apr 4 00:50:59.979: INFO: Node latest-worker is running more than one daemon pod Apr 4 00:51:00.976: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 4 00:51:00.979: INFO: Number of nodes with available pods: 1 Apr 4 00:51:00.979: INFO: Node latest-worker2 is running more than one daemon pod Apr 4 00:51:01.976: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 4 00:51:01.979: INFO: Number of nodes with available pods: 2 Apr 4 00:51:01.979: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Update daemon pods image. 
STEP: Check that daemon pods images are updated. Apr 4 00:51:02.023: INFO: Wrong image for pod: daemon-set-cqr6n. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Apr 4 00:51:02.023: INFO: Wrong image for pod: daemon-set-dhv2r. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Apr 4 00:51:02.038: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 4 00:51:03.046: INFO: Wrong image for pod: daemon-set-cqr6n. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Apr 4 00:51:03.046: INFO: Wrong image for pod: daemon-set-dhv2r. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Apr 4 00:51:03.050: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 4 00:51:04.043: INFO: Wrong image for pod: daemon-set-cqr6n. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Apr 4 00:51:04.043: INFO: Wrong image for pod: daemon-set-dhv2r. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Apr 4 00:51:04.047: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 4 00:51:05.042: INFO: Wrong image for pod: daemon-set-cqr6n. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Apr 4 00:51:05.042: INFO: Wrong image for pod: daemon-set-dhv2r. 
Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Apr 4 00:51:05.047: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 4 00:51:06.043: INFO: Wrong image for pod: daemon-set-cqr6n. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Apr 4 00:51:06.043: INFO: Pod daemon-set-cqr6n is not available Apr 4 00:51:06.043: INFO: Wrong image for pod: daemon-set-dhv2r. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Apr 4 00:51:06.047: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 4 00:51:07.043: INFO: Wrong image for pod: daemon-set-cqr6n. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Apr 4 00:51:07.043: INFO: Pod daemon-set-cqr6n is not available Apr 4 00:51:07.043: INFO: Wrong image for pod: daemon-set-dhv2r. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Apr 4 00:51:07.047: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 4 00:51:08.043: INFO: Wrong image for pod: daemon-set-cqr6n. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Apr 4 00:51:08.043: INFO: Pod daemon-set-cqr6n is not available Apr 4 00:51:08.043: INFO: Wrong image for pod: daemon-set-dhv2r. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. 
Apr 4 00:51:08.048: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 4 00:51:09.042: INFO: Wrong image for pod: daemon-set-cqr6n. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Apr 4 00:51:09.042: INFO: Pod daemon-set-cqr6n is not available Apr 4 00:51:09.042: INFO: Wrong image for pod: daemon-set-dhv2r. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Apr 4 00:51:09.045: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 4 00:51:10.041: INFO: Wrong image for pod: daemon-set-cqr6n. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Apr 4 00:51:10.041: INFO: Pod daemon-set-cqr6n is not available Apr 4 00:51:10.041: INFO: Wrong image for pod: daemon-set-dhv2r. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Apr 4 00:51:10.044: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 4 00:51:11.043: INFO: Wrong image for pod: daemon-set-cqr6n. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Apr 4 00:51:11.043: INFO: Pod daemon-set-cqr6n is not available Apr 4 00:51:11.043: INFO: Wrong image for pod: daemon-set-dhv2r. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. 
Apr 4 00:51:11.047: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 4 00:51:12.043: INFO: Wrong image for pod: daemon-set-cqr6n. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Apr 4 00:51:12.043: INFO: Pod daemon-set-cqr6n is not available Apr 4 00:51:12.043: INFO: Wrong image for pod: daemon-set-dhv2r. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Apr 4 00:51:12.048: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 4 00:51:13.058: INFO: Pod daemon-set-62hs5 is not available Apr 4 00:51:13.058: INFO: Wrong image for pod: daemon-set-dhv2r. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Apr 4 00:51:13.062: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 4 00:51:14.043: INFO: Pod daemon-set-62hs5 is not available Apr 4 00:51:14.043: INFO: Wrong image for pod: daemon-set-dhv2r. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Apr 4 00:51:14.047: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 4 00:51:15.043: INFO: Pod daemon-set-62hs5 is not available Apr 4 00:51:15.043: INFO: Wrong image for pod: daemon-set-dhv2r. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. 
Apr 4 00:51:15.048: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 4 00:51:16.043: INFO: Pod daemon-set-62hs5 is not available Apr 4 00:51:16.043: INFO: Wrong image for pod: daemon-set-dhv2r. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Apr 4 00:51:16.046: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 4 00:51:17.042: INFO: Wrong image for pod: daemon-set-dhv2r. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Apr 4 00:51:17.047: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 4 00:51:18.043: INFO: Wrong image for pod: daemon-set-dhv2r. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Apr 4 00:51:18.043: INFO: Pod daemon-set-dhv2r is not available Apr 4 00:51:18.047: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 4 00:51:19.042: INFO: Pod daemon-set-vnsfl is not available Apr 4 00:51:19.046: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node STEP: Check that daemon pods are still running on every node of the cluster. 
Apr 4 00:51:19.050: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 4 00:51:19.052: INFO: Number of nodes with available pods: 1 Apr 4 00:51:19.052: INFO: Node latest-worker is running more than one daemon pod Apr 4 00:51:20.058: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 4 00:51:20.061: INFO: Number of nodes with available pods: 1 Apr 4 00:51:20.061: INFO: Node latest-worker is running more than one daemon pod Apr 4 00:51:21.057: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 4 00:51:21.060: INFO: Number of nodes with available pods: 1 Apr 4 00:51:21.060: INFO: Node latest-worker is running more than one daemon pod Apr 4 00:51:22.058: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 4 00:51:22.061: INFO: Number of nodes with available pods: 2 Apr 4 00:51:22.061: INFO: Number of running nodes: 2, number of available pods: 2 [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:100 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-1595, will wait for the garbage collector to delete the pods Apr 4 00:51:22.134: INFO: Deleting DaemonSet.extensions daemon-set took: 6.876063ms Apr 4 00:51:24.234: INFO: Terminating DaemonSet.extensions daemon-set pods took: 2.100239743s Apr 4 00:51:33.037: INFO: Number of nodes with available pods: 0 Apr 4 00:51:33.037: INFO: Number of running nodes: 0, number of available pods: 0 Apr 4 
00:51:33.039: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-1595/daemonsets","resourceVersion":"5214474"},"items":null} Apr 4 00:51:33.041: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-1595/pods","resourceVersion":"5214474"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 4 00:51:33.050: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-1595" for this suite. • [SLOW TEST:35.224 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]","total":275,"completed":258,"skipped":4525,"failed":0} SSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 4 00:51:33.058: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating configMap with name projected-configmap-test-volume-fd099a16-f50e-4514-a0f2-d50ecc483220 STEP: Creating a pod to test consume configMaps Apr 4 00:51:33.147: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-f7fd6f70-242a-490a-b120-ba700ff9cb64" in namespace "projected-3401" to be "Succeeded or Failed" Apr 4 00:51:33.178: INFO: Pod "pod-projected-configmaps-f7fd6f70-242a-490a-b120-ba700ff9cb64": Phase="Pending", Reason="", readiness=false. Elapsed: 30.711472ms Apr 4 00:51:35.181: INFO: Pod "pod-projected-configmaps-f7fd6f70-242a-490a-b120-ba700ff9cb64": Phase="Pending", Reason="", readiness=false. Elapsed: 2.033927974s Apr 4 00:51:37.185: INFO: Pod "pod-projected-configmaps-f7fd6f70-242a-490a-b120-ba700ff9cb64": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.037886917s STEP: Saw pod success Apr 4 00:51:37.185: INFO: Pod "pod-projected-configmaps-f7fd6f70-242a-490a-b120-ba700ff9cb64" satisfied condition "Succeeded or Failed" Apr 4 00:51:37.188: INFO: Trying to get logs from node latest-worker pod pod-projected-configmaps-f7fd6f70-242a-490a-b120-ba700ff9cb64 container projected-configmap-volume-test: STEP: delete the pod Apr 4 00:51:37.236: INFO: Waiting for pod pod-projected-configmaps-f7fd6f70-242a-490a-b120-ba700ff9cb64 to disappear Apr 4 00:51:37.246: INFO: Pod pod-projected-configmaps-f7fd6f70-242a-490a-b120-ba700ff9cb64 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 4 00:51:37.246: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3401" for this suite. 
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":259,"skipped":4533,"failed":0} S ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replication controller. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 4 00:51:37.251: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a replication controller. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a ReplicationController STEP: Ensuring resource quota status captures replication controller creation STEP: Deleting a ReplicationController STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 4 00:51:48.359: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-6728" for this suite. • [SLOW TEST:11.116 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a replication controller. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replication controller. [Conformance]","total":275,"completed":260,"skipped":4534,"failed":0} SSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 4 00:51:48.368: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Apr 4 00:51:49.027: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Apr 4 00:51:51.049: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721558309, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721558309, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum 
availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721558309, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721558309, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6cc9cc9dc\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Apr 4 00:51:54.072: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should honor timeout [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Setting timeout (1s) shorter than webhook latency (5s) STEP: Registering slow webhook via the AdmissionRegistration API STEP: Request fails when timeout (1s) is shorter than slow webhook latency (5s) STEP: Having no error when timeout is shorter than webhook latency and failure policy is ignore STEP: Registering slow webhook via the AdmissionRegistration API STEP: Having no error when timeout is longer than webhook latency STEP: Registering slow webhook via the AdmissionRegistration API STEP: Having no error when timeout is empty (defaulted to 10s in v1) STEP: Registering slow webhook via the AdmissionRegistration API [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 4 00:52:06.256: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-3443" for this suite. STEP: Destroying namespace "webhook-3443-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:17.978 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should honor timeout [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","total":275,"completed":261,"skipped":4538,"failed":0} SSSSS ------------------------------ [sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 4 00:52:06.346: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219 [BeforeEach] Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1288 STEP: creating an pod Apr 4 00:52:06.382: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config run logs-generator --image=us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12 --namespace=kubectl-6061 -- logs-generator --log-lines-total 100 --run-duration 20s' Apr 4 00:52:06.474: INFO: stderr: "" Apr 4 00:52:06.474: INFO: stdout: "pod/logs-generator created\n" [It] should be able to retrieve 
and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Waiting for log generator to start. Apr 4 00:52:06.474: INFO: Waiting up to 5m0s for 1 pods to be running and ready, or succeeded: [logs-generator] Apr 4 00:52:06.474: INFO: Waiting up to 5m0s for pod "logs-generator" in namespace "kubectl-6061" to be "running and ready, or succeeded" Apr 4 00:52:06.482: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 7.796042ms Apr 4 00:52:08.485: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 2.011476709s Apr 4 00:52:10.489: INFO: Pod "logs-generator": Phase="Running", Reason="", readiness=true. Elapsed: 4.01545259s Apr 4 00:52:10.489: INFO: Pod "logs-generator" satisfied condition "running and ready, or succeeded" Apr 4 00:52:10.489: INFO: Wanted all 1 pods to be running and ready, or succeeded. Result: true. Pods: [logs-generator] STEP: checking for a matching strings Apr 4 00:52:10.490: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-6061' Apr 4 00:52:10.592: INFO: stderr: "" Apr 4 00:52:10.592: INFO: stdout: "I0404 00:52:08.660991 1 logs_generator.go:76] 0 POST /api/v1/namespaces/default/pods/pjbh 264\nI0404 00:52:08.861453 1 logs_generator.go:76] 1 POST /api/v1/namespaces/kube-system/pods/84p8 567\nI0404 00:52:09.061299 1 logs_generator.go:76] 2 PUT /api/v1/namespaces/ns/pods/9w26 338\nI0404 00:52:09.261340 1 logs_generator.go:76] 3 PUT /api/v1/namespaces/kube-system/pods/vzg7 575\nI0404 00:52:09.461250 1 logs_generator.go:76] 4 GET /api/v1/namespaces/ns/pods/br4 235\nI0404 00:52:09.661326 1 logs_generator.go:76] 5 GET /api/v1/namespaces/default/pods/x97 472\nI0404 00:52:09.861284 1 logs_generator.go:76] 6 GET /api/v1/namespaces/default/pods/ggzj 371\nI0404 00:52:10.061293 1 logs_generator.go:76] 7 PUT 
/api/v1/namespaces/default/pods/rbq 215\nI0404 00:52:10.261272 1 logs_generator.go:76] 8 PUT /api/v1/namespaces/ns/pods/4mv 379\nI0404 00:52:10.461281 1 logs_generator.go:76] 9 POST /api/v1/namespaces/kube-system/pods/krw 390\n" STEP: limiting log lines Apr 4 00:52:10.592: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-6061 --tail=1' Apr 4 00:52:10.700: INFO: stderr: "" Apr 4 00:52:10.700: INFO: stdout: "I0404 00:52:10.661321 1 logs_generator.go:76] 10 POST /api/v1/namespaces/kube-system/pods/79q 466\n" Apr 4 00:52:10.700: INFO: got output "I0404 00:52:10.661321 1 logs_generator.go:76] 10 POST /api/v1/namespaces/kube-system/pods/79q 466\n" STEP: limiting log bytes Apr 4 00:52:10.700: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-6061 --limit-bytes=1' Apr 4 00:52:10.809: INFO: stderr: "" Apr 4 00:52:10.809: INFO: stdout: "I" Apr 4 00:52:10.809: INFO: got output "I" STEP: exposing timestamps Apr 4 00:52:10.809: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-6061 --tail=1 --timestamps' Apr 4 00:52:10.925: INFO: stderr: "" Apr 4 00:52:10.925: INFO: stdout: "2020-04-04T00:52:10.861521257Z I0404 00:52:10.861366 1 logs_generator.go:76] 11 GET /api/v1/namespaces/ns/pods/zbw 588\n" Apr 4 00:52:10.925: INFO: got output "2020-04-04T00:52:10.861521257Z I0404 00:52:10.861366 1 logs_generator.go:76] 11 GET /api/v1/namespaces/ns/pods/zbw 588\n" STEP: restricting to a time range Apr 4 00:52:13.425: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-6061 --since=1s' Apr 4 00:52:13.539: INFO: stderr: "" Apr 4 00:52:13.539: INFO: stdout: 
"I0404 00:52:12.661167 1 logs_generator.go:76] 20 PUT /api/v1/namespaces/kube-system/pods/5bp 594\nI0404 00:52:12.861399 1 logs_generator.go:76] 21 GET /api/v1/namespaces/ns/pods/q25 384\nI0404 00:52:13.061303 1 logs_generator.go:76] 22 GET /api/v1/namespaces/default/pods/wkk 355\nI0404 00:52:13.261306 1 logs_generator.go:76] 23 POST /api/v1/namespaces/kube-system/pods/sfpf 427\nI0404 00:52:13.461262 1 logs_generator.go:76] 24 PUT /api/v1/namespaces/default/pods/8rk 449\n" Apr 4 00:52:13.539: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-6061 --since=24h' Apr 4 00:52:13.642: INFO: stderr: "" Apr 4 00:52:13.642: INFO: stdout: "I0404 00:52:08.660991 1 logs_generator.go:76] 0 POST /api/v1/namespaces/default/pods/pjbh 264\nI0404 00:52:08.861453 1 logs_generator.go:76] 1 POST /api/v1/namespaces/kube-system/pods/84p8 567\nI0404 00:52:09.061299 1 logs_generator.go:76] 2 PUT /api/v1/namespaces/ns/pods/9w26 338\nI0404 00:52:09.261340 1 logs_generator.go:76] 3 PUT /api/v1/namespaces/kube-system/pods/vzg7 575\nI0404 00:52:09.461250 1 logs_generator.go:76] 4 GET /api/v1/namespaces/ns/pods/br4 235\nI0404 00:52:09.661326 1 logs_generator.go:76] 5 GET /api/v1/namespaces/default/pods/x97 472\nI0404 00:52:09.861284 1 logs_generator.go:76] 6 GET /api/v1/namespaces/default/pods/ggzj 371\nI0404 00:52:10.061293 1 logs_generator.go:76] 7 PUT /api/v1/namespaces/default/pods/rbq 215\nI0404 00:52:10.261272 1 logs_generator.go:76] 8 PUT /api/v1/namespaces/ns/pods/4mv 379\nI0404 00:52:10.461281 1 logs_generator.go:76] 9 POST /api/v1/namespaces/kube-system/pods/krw 390\nI0404 00:52:10.661321 1 logs_generator.go:76] 10 POST /api/v1/namespaces/kube-system/pods/79q 466\nI0404 00:52:10.861366 1 logs_generator.go:76] 11 GET /api/v1/namespaces/ns/pods/zbw 588\nI0404 00:52:11.061334 1 logs_generator.go:76] 12 PUT /api/v1/namespaces/default/pods/clq 299\nI0404 00:52:11.261275 1 
logs_generator.go:76] 13 PUT /api/v1/namespaces/ns/pods/trc5 272\nI0404 00:52:11.461165 1 logs_generator.go:76] 14 POST /api/v1/namespaces/kube-system/pods/cg67 256\nI0404 00:52:11.661367 1 logs_generator.go:76] 15 POST /api/v1/namespaces/default/pods/xm4l 570\nI0404 00:52:11.861270 1 logs_generator.go:76] 16 PUT /api/v1/namespaces/ns/pods/t8b 375\nI0404 00:52:12.061289 1 logs_generator.go:76] 17 PUT /api/v1/namespaces/kube-system/pods/4br 223\nI0404 00:52:12.261244 1 logs_generator.go:76] 18 PUT /api/v1/namespaces/kube-system/pods/rbg 424\nI0404 00:52:12.461277 1 logs_generator.go:76] 19 PUT /api/v1/namespaces/kube-system/pods/d8z2 526\nI0404 00:52:12.661167 1 logs_generator.go:76] 20 PUT /api/v1/namespaces/kube-system/pods/5bp 594\nI0404 00:52:12.861399 1 logs_generator.go:76] 21 GET /api/v1/namespaces/ns/pods/q25 384\nI0404 00:52:13.061303 1 logs_generator.go:76] 22 GET /api/v1/namespaces/default/pods/wkk 355\nI0404 00:52:13.261306 1 logs_generator.go:76] 23 POST /api/v1/namespaces/kube-system/pods/sfpf 427\nI0404 00:52:13.461262 1 logs_generator.go:76] 24 PUT /api/v1/namespaces/default/pods/8rk 449\n" [AfterEach] Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1294 Apr 4 00:52:13.643: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config delete pod logs-generator --namespace=kubectl-6061' Apr 4 00:52:22.757: INFO: stderr: "" Apr 4 00:52:22.757: INFO: stdout: "pod \"logs-generator\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 4 00:52:22.757: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-6061" for this suite. 
• [SLOW TEST:16.422 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1284 should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]","total":275,"completed":262,"skipped":4543,"failed":0} SSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 4 00:52:22.768: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64 STEP: create the container to handle the HTTPGet hook request. 
[It] should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: create the pod with lifecycle hook STEP: delete the pod with lifecycle hook Apr 4 00:52:30.894: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Apr 4 00:52:30.901: INFO: Pod pod-with-prestop-exec-hook still exists Apr 4 00:52:32.902: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Apr 4 00:52:32.906: INFO: Pod pod-with-prestop-exec-hook still exists Apr 4 00:52:34.902: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Apr 4 00:52:34.906: INFO: Pod pod-with-prestop-exec-hook no longer exists STEP: check prestop hook [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 4 00:52:34.913: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-1019" for this suite. 
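The pod exercised above carries a preStop exec hook; a hypothetical manifest of that shape (the real spec lives in test/e2e/common/lifecycle_hook.go — the image and commands here are placeholders, only the pod name is taken from the log):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-with-prestop-exec-hook   # name taken from the log above
spec:
  containers:
  - name: main
    image: registry.example.com/busybox   # placeholder image
    command: ["sleep", "3600"]
    lifecycle:
      preStop:
        exec:
          # Placeholder command; the actual test calls back to its
          # HTTPGet hook-handler pod to record that the hook ran.
          command: ["/bin/sh", "-c", "sleep 5"]
```

On deletion, the kubelet runs the preStop command before terminating the container, which is why the pod above lingers for a few seconds ("still exists") before disappearing.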
• [SLOW TEST:12.152 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]","total":275,"completed":263,"skipped":4546,"failed":0} SSSSSSSS ------------------------------ [sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 4 00:52:34.921: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating a watch on configmaps with label A STEP: creating a watch on configmaps with label B STEP: creating a watch on configmaps with label A or B STEP: creating a configmap with label A and ensuring the correct watchers observe the notification Apr 4 00:52:34.982: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-8553 /api/v1/namespaces/watch-8553/configmaps/e2e-watch-test-configmap-a b74ca827-b437-426e-9ea3-72139b889096 5214856 0 
2020-04-04 00:52:34 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} Apr 4 00:52:34.982: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-8553 /api/v1/namespaces/watch-8553/configmaps/e2e-watch-test-configmap-a b74ca827-b437-426e-9ea3-72139b889096 5214856 0 2020-04-04 00:52:34 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} STEP: modifying configmap A and ensuring the correct watchers observe the notification Apr 4 00:52:44.990: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-8553 /api/v1/namespaces/watch-8553/configmaps/e2e-watch-test-configmap-a b74ca827-b437-426e-9ea3-72139b889096 5214903 0 2020-04-04 00:52:34 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} Apr 4 00:52:44.990: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-8553 /api/v1/namespaces/watch-8553/configmaps/e2e-watch-test-configmap-a b74ca827-b437-426e-9ea3-72139b889096 5214903 0 2020-04-04 00:52:34 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} STEP: modifying configmap A again and ensuring the correct watchers observe the notification Apr 4 00:52:54.997: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-8553 /api/v1/namespaces/watch-8553/configmaps/e2e-watch-test-configmap-a b74ca827-b437-426e-9ea3-72139b889096 5214935 0 2020-04-04 00:52:34 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} Apr 4 00:52:54.997: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a 
watch-8553 /api/v1/namespaces/watch-8553/configmaps/e2e-watch-test-configmap-a b74ca827-b437-426e-9ea3-72139b889096 5214935 0 2020-04-04 00:52:34 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} STEP: deleting configmap A and ensuring the correct watchers observe the notification Apr 4 00:53:05.004: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-8553 /api/v1/namespaces/watch-8553/configmaps/e2e-watch-test-configmap-a b74ca827-b437-426e-9ea3-72139b889096 5214965 0 2020-04-04 00:52:34 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} Apr 4 00:53:05.004: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-8553 /api/v1/namespaces/watch-8553/configmaps/e2e-watch-test-configmap-a b74ca827-b437-426e-9ea3-72139b889096 5214965 0 2020-04-04 00:52:34 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} STEP: creating a configmap with label B and ensuring the correct watchers observe the notification Apr 4 00:53:15.011: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-8553 /api/v1/namespaces/watch-8553/configmaps/e2e-watch-test-configmap-b 23b94e9c-578d-4837-8967-029cfdd6adf2 5214995 0 2020-04-04 00:53:15 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} Apr 4 00:53:15.011: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-8553 /api/v1/namespaces/watch-8553/configmaps/e2e-watch-test-configmap-b 23b94e9c-578d-4837-8967-029cfdd6adf2 5214995 0 2020-04-04 00:53:15 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] 
[]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} STEP: deleting configmap B and ensuring the correct watchers observe the notification Apr 4 00:53:25.018: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-8553 /api/v1/namespaces/watch-8553/configmaps/e2e-watch-test-configmap-b 23b94e9c-578d-4837-8967-029cfdd6adf2 5215024 0 2020-04-04 00:53:15 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} Apr 4 00:53:25.018: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-8553 /api/v1/namespaces/watch-8553/configmaps/e2e-watch-test-configmap-b 23b94e9c-578d-4837-8967-029cfdd6adf2 5215024 0 2020-04-04 00:53:15 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 4 00:53:35.018: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-8553" for this suite. 
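The three watchers above differ only in their label selectors (label A, label B, and A-or-B). A ConfigMap matching watcher A would look like the following, with the name and label copied from the log:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: e2e-watch-test-configmap-a
  labels:
    watch-this-configmap: multiple-watchers-A
data: {}
```

The same ADDED/MODIFIED/DELETED events can be observed by hand with `kubectl get configmaps -l watch-this-configmap=multiple-watchers-A --watch`.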
• [SLOW TEST:60.112 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance]","total":275,"completed":264,"skipped":4554,"failed":0} SSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 4 00:53:35.032: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating the pod Apr 4 00:53:39.651: INFO: Successfully updated pod "annotationupdate16ef40f2-768f-4a83-a069-ca3795e0ff5f" [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 4 00:53:41.683: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3824" for this suite. 
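The test above verifies that a projected downward API volume file is refreshed when the pod's annotations change. A minimal volume stanza of that kind (volume name and file path are illustrative):

```yaml
volumes:
- name: podinfo
  projected:
    sources:
    - downwardAPI:
        items:
        - path: annotations
          fieldRef:
            fieldPath: metadata.annotations
```

The kubelet rewrites the mounted file on a subsequent sync after the annotation update, which is what the "Successfully updated pod" step waits on.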
• [SLOW TEST:6.659 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:36 should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance]","total":275,"completed":265,"skipped":4563,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 4 00:53:41.692: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating secret with name secret-test-fd4aeb00-a0d5-4199-897f-7a02698e5ce2 STEP: Creating a pod to test consume secrets Apr 4 00:53:41.780: INFO: Waiting up to 5m0s for pod "pod-secrets-cf1cc5cf-c202-4f06-80ac-59c2273c7ff0" in namespace "secrets-1597" to be "Succeeded or Failed" Apr 4 00:53:41.797: INFO: Pod "pod-secrets-cf1cc5cf-c202-4f06-80ac-59c2273c7ff0": Phase="Pending", Reason="", readiness=false. Elapsed: 16.230819ms Apr 4 00:53:43.802: INFO: Pod "pod-secrets-cf1cc5cf-c202-4f06-80ac-59c2273c7ff0": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.021273597s Apr 4 00:53:45.806: INFO: Pod "pod-secrets-cf1cc5cf-c202-4f06-80ac-59c2273c7ff0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.025385144s STEP: Saw pod success Apr 4 00:53:45.806: INFO: Pod "pod-secrets-cf1cc5cf-c202-4f06-80ac-59c2273c7ff0" satisfied condition "Succeeded or Failed" Apr 4 00:53:45.808: INFO: Trying to get logs from node latest-worker2 pod pod-secrets-cf1cc5cf-c202-4f06-80ac-59c2273c7ff0 container secret-volume-test: STEP: delete the pod Apr 4 00:53:45.838: INFO: Waiting for pod pod-secrets-cf1cc5cf-c202-4f06-80ac-59c2273c7ff0 to disappear Apr 4 00:53:45.843: INFO: Pod pod-secrets-cf1cc5cf-c202-4f06-80ac-59c2273c7ff0 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 4 00:53:45.843: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-1597" for this suite. •{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance]","total":275,"completed":266,"skipped":4578,"failed":0} SS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 4 00:53:45.849: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 
STEP: Creating projection with secret that has name projected-secret-test-map-e3388b6d-a439-4b80-8177-05f13f091891 STEP: Creating a pod to test consume secrets Apr 4 00:53:45.941: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-82ba7c98-57fd-4834-8402-d47d060593bf" in namespace "projected-9955" to be "Succeeded or Failed" Apr 4 00:53:45.966: INFO: Pod "pod-projected-secrets-82ba7c98-57fd-4834-8402-d47d060593bf": Phase="Pending", Reason="", readiness=false. Elapsed: 24.43584ms Apr 4 00:53:47.970: INFO: Pod "pod-projected-secrets-82ba7c98-57fd-4834-8402-d47d060593bf": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02856028s Apr 4 00:53:49.974: INFO: Pod "pod-projected-secrets-82ba7c98-57fd-4834-8402-d47d060593bf": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.03265714s STEP: Saw pod success Apr 4 00:53:49.974: INFO: Pod "pod-projected-secrets-82ba7c98-57fd-4834-8402-d47d060593bf" satisfied condition "Succeeded or Failed" Apr 4 00:53:49.977: INFO: Trying to get logs from node latest-worker2 pod pod-projected-secrets-82ba7c98-57fd-4834-8402-d47d060593bf container projected-secret-volume-test: STEP: delete the pod Apr 4 00:53:50.011: INFO: Waiting for pod pod-projected-secrets-82ba7c98-57fd-4834-8402-d47d060593bf to disappear Apr 4 00:53:50.023: INFO: Pod pod-projected-secrets-82ba7c98-57fd-4834-8402-d47d060593bf no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 4 00:53:50.023: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-9955" for this suite. 
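The projected-secret test above mounts a secret key to a mapped path with an explicit per-item mode. A sketch of such a volume (the key, path, and mode are illustrative; the secret name is the one from the log):

```yaml
volumes:
- name: projected-secret-volume
  projected:
    sources:
    - secret:
        name: projected-secret-test-map-e3388b6d-a439-4b80-8177-05f13f091891
        items:
        - key: data-1            # illustrative key
          path: new-path-data-1  # illustrative mapped path
          mode: 0400             # the "Item Mode set" from the test name
```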
•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":267,"skipped":4580,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Namespaces [Serial] should patch a Namespace [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 4 00:53:50.030: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should patch a Namespace [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating a Namespace STEP: patching the Namespace STEP: get the Namespace and ensuring it has the label [AfterEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 4 00:53:50.210: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "namespaces-4423" for this suite. STEP: Destroying namespace "nspatchtest-ca74488b-81f3-4974-9ca4-4acf9dc66d2b-8140" for this suite. 
•{"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should patch a Namespace [Conformance]","total":275,"completed":268,"skipped":4601,"failed":0} SS ------------------------------ [sig-storage] Projected downwardAPI should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 4 00:53:50.228: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test downward API volume plugin Apr 4 00:53:50.276: INFO: Waiting up to 5m0s for pod "downwardapi-volume-950826b0-8412-4cf8-9a93-9a121af1524e" in namespace "projected-4859" to be "Succeeded or Failed" Apr 4 00:53:50.291: INFO: Pod "downwardapi-volume-950826b0-8412-4cf8-9a93-9a121af1524e": Phase="Pending", Reason="", readiness=false. Elapsed: 14.59026ms Apr 4 00:53:52.295: INFO: Pod "downwardapi-volume-950826b0-8412-4cf8-9a93-9a121af1524e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018900261s Apr 4 00:53:54.300: INFO: Pod "downwardapi-volume-950826b0-8412-4cf8-9a93-9a121af1524e": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.023491469s STEP: Saw pod success Apr 4 00:53:54.300: INFO: Pod "downwardapi-volume-950826b0-8412-4cf8-9a93-9a121af1524e" satisfied condition "Succeeded or Failed" Apr 4 00:53:54.303: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-950826b0-8412-4cf8-9a93-9a121af1524e container client-container: STEP: delete the pod Apr 4 00:53:54.340: INFO: Waiting for pod downwardapi-volume-950826b0-8412-4cf8-9a93-9a121af1524e to disappear Apr 4 00:53:54.352: INFO: Pod downwardapi-volume-950826b0-8412-4cf8-9a93-9a121af1524e no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 4 00:53:54.352: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-4859" for this suite. •{"msg":"PASSED [sig-storage] Projected downwardAPI should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":269,"skipped":4603,"failed":0} SSSSSSSS ------------------------------ [sig-auth] ServiceAccounts should allow opting out of API token automount [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 4 00:53:54.359: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should allow opting out of API token automount [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: getting the auto-created API token Apr 4 00:53:54.978: INFO: created pod pod-service-account-defaultsa Apr 4 00:53:54.978: INFO: pod pod-service-account-defaultsa service account token volume mount: 
true Apr 4 00:53:54.994: INFO: created pod pod-service-account-mountsa Apr 4 00:53:54.994: INFO: pod pod-service-account-mountsa service account token volume mount: true Apr 4 00:53:55.022: INFO: created pod pod-service-account-nomountsa Apr 4 00:53:55.022: INFO: pod pod-service-account-nomountsa service account token volume mount: false Apr 4 00:53:55.048: INFO: created pod pod-service-account-defaultsa-mountspec Apr 4 00:53:55.048: INFO: pod pod-service-account-defaultsa-mountspec service account token volume mount: true Apr 4 00:53:55.128: INFO: created pod pod-service-account-mountsa-mountspec Apr 4 00:53:55.128: INFO: pod pod-service-account-mountsa-mountspec service account token volume mount: true Apr 4 00:53:55.141: INFO: created pod pod-service-account-nomountsa-mountspec Apr 4 00:53:55.141: INFO: pod pod-service-account-nomountsa-mountspec service account token volume mount: true Apr 4 00:53:55.155: INFO: created pod pod-service-account-defaultsa-nomountspec Apr 4 00:53:55.155: INFO: pod pod-service-account-defaultsa-nomountspec service account token volume mount: false Apr 4 00:53:55.205: INFO: created pod pod-service-account-mountsa-nomountspec Apr 4 00:53:55.205: INFO: pod pod-service-account-mountsa-nomountspec service account token volume mount: false Apr 4 00:53:55.258: INFO: created pod pod-service-account-nomountsa-nomountspec Apr 4 00:53:55.259: INFO: pod pod-service-account-nomountsa-nomountspec service account token volume mount: false [AfterEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 4 00:53:55.259: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svcaccounts-5190" for this suite. 
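The nine pods above form a matrix crossing the service account's `automountServiceAccountToken` setting with the pod-level field; when both are set, the pod spec wins. A pod opting out regardless of its service account would look like this (the service account name and image are illustrative; the pod name is from the log):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-service-account-nomountsa-nomountspec
spec:
  serviceAccountName: nomount-sa        # illustrative SA name
  automountServiceAccountToken: false   # pod-level opt-out overrides the SA
  containers:
  - name: main
    image: registry.example.com/pause   # placeholder image
```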
•{"msg":"PASSED [sig-auth] ServiceAccounts should allow opting out of API token automount [Conformance]","total":275,"completed":270,"skipped":4611,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 4 00:53:55.386: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:134 [It] should rollback without unnecessary restarts [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Apr 4 00:53:55.520: INFO: Create a RollingUpdate DaemonSet Apr 4 00:53:55.526: INFO: Check that daemon pods launch on every node of the cluster Apr 4 00:53:55.552: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 4 00:53:55.642: INFO: Number of nodes with available pods: 0 Apr 4 00:53:55.642: INFO: Node latest-worker is running more than one daemon pod Apr 4 00:53:56.647: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 4 00:53:56.651: INFO: Number of nodes with available pods: 0 Apr 4 00:53:56.651: INFO: Node latest-worker is running more than one daemon pod Apr 4 00:53:57.648: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints 
[{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 4 00:53:57.651: INFO: Number of nodes with available pods: 0 Apr 4 00:53:57.651: INFO: Node latest-worker is running more than one daemon pod Apr 4 00:53:58.645: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 4 00:53:58.647: INFO: Number of nodes with available pods: 0 Apr 4 00:53:58.647: INFO: Node latest-worker is running more than one daemon pod Apr 4 00:53:59.959: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 4 00:54:00.395: INFO: Number of nodes with available pods: 0 Apr 4 00:54:00.396: INFO: Node latest-worker is running more than one daemon pod Apr 4 00:54:00.724: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 4 00:54:00.729: INFO: Number of nodes with available pods: 0 Apr 4 00:54:00.729: INFO: Node latest-worker is running more than one daemon pod Apr 4 00:54:01.699: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 4 00:54:01.868: INFO: Number of nodes with available pods: 0 Apr 4 00:54:01.868: INFO: Node latest-worker is running more than one daemon pod Apr 4 00:54:02.696: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 4 00:54:02.893: INFO: Number of nodes with available pods: 0 Apr 4 00:54:02.893: INFO: Node latest-worker is running more than one daemon pod Apr 4 00:54:03.696: INFO: DaemonSet pods can't 
tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 4 00:54:03.743: INFO: Number of nodes with available pods: 0 Apr 4 00:54:03.743: INFO: Node latest-worker is running more than one daemon pod Apr 4 00:54:05.375: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 4 00:54:05.683: INFO: Number of nodes with available pods: 0 Apr 4 00:54:05.683: INFO: Node latest-worker is running more than one daemon pod Apr 4 00:54:06.660: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 4 00:54:06.685: INFO: Number of nodes with available pods: 1 Apr 4 00:54:06.685: INFO: Node latest-worker2 is running more than one daemon pod Apr 4 00:54:07.647: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 4 00:54:07.650: INFO: Number of nodes with available pods: 2 Apr 4 00:54:07.650: INFO: Number of running nodes: 2, number of available pods: 2 Apr 4 00:54:07.650: INFO: Update the DaemonSet to trigger a rollout Apr 4 00:54:07.657: INFO: Updating DaemonSet daemon-set Apr 4 00:54:13.713: INFO: Roll back the DaemonSet before rollout is complete Apr 4 00:54:13.723: INFO: Updating DaemonSet daemon-set Apr 4 00:54:13.723: INFO: Make sure DaemonSet rollback is complete Apr 4 00:54:13.730: INFO: Wrong image for pod: daemon-set-knf9t. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. 
Apr 4 00:54:13.730: INFO: Pod daemon-set-knf9t is not available Apr 4 00:54:13.751: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 4 00:54:14.756: INFO: Wrong image for pod: daemon-set-knf9t. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Apr 4 00:54:14.756: INFO: Pod daemon-set-knf9t is not available Apr 4 00:54:14.759: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 4 00:54:15.755: INFO: Pod daemon-set-56qxd is not available Apr 4 00:54:15.758: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:100 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-8825, will wait for the garbage collector to delete the pods Apr 4 00:54:15.819: INFO: Deleting DaemonSet.extensions daemon-set took: 4.487283ms Apr 4 00:54:16.120: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.202886ms Apr 4 00:54:22.823: INFO: Number of nodes with available pods: 0 Apr 4 00:54:22.823: INFO: Number of running nodes: 0, number of available pods: 0 Apr 4 00:54:22.826: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-8825/daemonsets","resourceVersion":"5215486"},"items":null} Apr 4 00:54:22.829: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-8825/pods","resourceVersion":"5215486"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 4 00:54:22.839: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-8825" for this suite.
• [SLOW TEST:27.460 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
should rollback without unnecessary restarts [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]","total":275,"completed":271,"skipped":4629,"failed":0}
S
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 4 00:54:22.847: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Apr 4 00:54:23.864: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Apr 4 00:54:25.939: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721558463, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721558463, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721558463, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721558463, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6cc9cc9dc\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Apr 4 00:54:29.013: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should mutate pod and apply defaults after mutation [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Registering the mutating pod webhook via the AdmissionRegistration API
STEP: create a pod that should be updated by the webhook
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 4 00:54:29.276: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-3217" for this suite.
STEP: Destroying namespace "webhook-3217-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
• [SLOW TEST:6.568 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
should mutate pod and apply defaults after mutation [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","total":275,"completed":272,"skipped":4630,"failed":0}
SSSSSSSSSSSS
------------------------------
[k8s.io] Lease lease API should be available [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Lease
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 4 00:54:29.416: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename lease-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] lease API should be available [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[AfterEach] [k8s.io] Lease
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 4 00:54:29.552: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "lease-test-5118" for this suite.
•{"msg":"PASSED [k8s.io] Lease lease API should be available [Conformance]","total":275,"completed":273,"skipped":4642,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 4 00:54:29.578: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:134
[It] should run and stop simple daemon [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating simple DaemonSet "daemon-set"
STEP: Check that daemon pods launch on every node of the cluster.
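The repeated "DaemonSet pods can't tolerate node latest-control-plane with taints [...]" messages below come from the framework checking each node's taints against the pod's tolerations before counting it. A minimal sketch of that matching rule, as a hypothetical stand-in for the framework's actual helper (the function names and dict shapes here are illustrative, not the k8s.io API):

```python
# Hypothetical sketch: a node is counted for a DaemonSet pod only if every
# NoSchedule taint on the node is matched by some toleration on the pod.
# The test's pods carry no toleration for node-role.kubernetes.io/master,
# so the control-plane node is skipped, as logged below.

def tolerates(toleration, taint):
    """True if a single toleration matches a single taint."""
    # An empty key with operator "Exists" tolerates everything.
    if toleration.get("operator") == "Exists" and not toleration.get("key"):
        return True
    if toleration.get("key") != taint["key"]:
        return False
    if toleration.get("effect") and toleration["effect"] != taint["effect"]:
        return False
    if toleration.get("operator", "Equal") == "Exists":
        return True
    return toleration.get("value", "") == taint.get("value", "")

def node_is_schedulable(taints, tolerations):
    """Every NoSchedule taint must be matched by some toleration."""
    return all(
        any(tolerates(tol, t) for tol in tolerations)
        for t in taints
        if t["effect"] == "NoSchedule"
    )

master_taint = {"key": "node-role.kubernetes.io/master",
                "value": "", "effect": "NoSchedule"}

print(node_is_schedulable([master_taint], []))  # False: node is skipped
print(node_is_schedulable([master_taint],
      [{"key": "node-role.kubernetes.io/master",
        "operator": "Exists", "effect": "NoSchedule"}]))  # True
```

This is why only the two worker nodes ever appear in the "Number of nodes with available pods" counts that follow.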
Apr 4 00:54:29.666: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 4 00:54:29.698: INFO: Number of nodes with available pods: 0
Apr 4 00:54:29.698: INFO: Node latest-worker is running more than one daemon pod
Apr 4 00:54:30.702: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 4 00:54:30.705: INFO: Number of nodes with available pods: 0
Apr 4 00:54:30.705: INFO: Node latest-worker is running more than one daemon pod
Apr 4 00:54:31.798: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 4 00:54:31.800: INFO: Number of nodes with available pods: 0
Apr 4 00:54:31.800: INFO: Node latest-worker is running more than one daemon pod
Apr 4 00:54:32.954: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 4 00:54:32.957: INFO: Number of nodes with available pods: 1
Apr 4 00:54:32.957: INFO: Node latest-worker is running more than one daemon pod
Apr 4 00:54:33.703: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 4 00:54:33.707: INFO: Number of nodes with available pods: 1
Apr 4 00:54:33.707: INFO: Node latest-worker is running more than one daemon pod
Apr 4 00:54:34.701: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 4 00:54:34.704: INFO: Number of nodes with available pods: 2
Apr 4 00:54:34.704: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Stop a daemon pod, check that the daemon pod is revived.
Apr 4 00:54:34.742: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 4 00:54:34.745: INFO: Number of nodes with available pods: 1
Apr 4 00:54:34.745: INFO: Node latest-worker is running more than one daemon pod
Apr 4 00:54:35.750: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 4 00:54:35.752: INFO: Number of nodes with available pods: 1
Apr 4 00:54:35.752: INFO: Node latest-worker is running more than one daemon pod
Apr 4 00:54:36.751: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 4 00:54:36.822: INFO: Number of nodes with available pods: 1
Apr 4 00:54:36.822: INFO: Node latest-worker is running more than one daemon pod
Apr 4 00:54:37.751: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 4 00:54:37.753: INFO: Number of nodes with available pods: 1
Apr 4 00:54:37.754: INFO: Node latest-worker is running more than one daemon pod
Apr 4 00:54:38.749: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 4 00:54:38.752: INFO: Number of nodes with available pods: 1
Apr 4 00:54:38.752: INFO: Node latest-worker is running more than one daemon pod
Apr 4 00:54:39.749: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 4 00:54:39.752: INFO: Number of nodes with available pods: 1
Apr 4 00:54:39.752: INFO: Node latest-worker is running more than one daemon pod
Apr 4 00:54:40.751: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 4 00:54:40.754: INFO: Number of nodes with available pods: 1
Apr 4 00:54:40.754: INFO: Node latest-worker is running more than one daemon pod
Apr 4 00:54:41.751: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 4 00:54:41.754: INFO: Number of nodes with available pods: 1
Apr 4 00:54:41.755: INFO: Node latest-worker is running more than one daemon pod
Apr 4 00:54:42.780: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 4 00:54:42.785: INFO: Number of nodes with available pods: 1
Apr 4 00:54:42.785: INFO: Node latest-worker is running more than one daemon pod
Apr 4 00:54:43.750: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 4 00:54:43.754: INFO: Number of nodes with available pods: 1
Apr 4 00:54:43.754: INFO: Node latest-worker is running more than one daemon pod
Apr 4 00:54:44.752: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 4 00:54:44.768: INFO: Number of nodes with available pods: 1
Apr 4 00:54:44.768: INFO: Node latest-worker is running more than one daemon pod
Apr 4 00:54:45.751: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 4 00:54:45.754: INFO: Number of nodes with available pods: 1
Apr 4 00:54:45.754: INFO: Node latest-worker is running more than one daemon pod
Apr 4 00:54:46.751: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 4 00:54:46.755: INFO: Number of nodes with available pods: 2
Apr 4 00:54:46.755: INFO: Number of running nodes: 2, number of available pods: 2
[AfterEach] [sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:100
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-7095, will wait for the garbage collector to delete the pods
Apr 4 00:54:46.818: INFO: Deleting DaemonSet.extensions daemon-set took: 6.317033ms
Apr 4 00:54:46.918: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.257917ms
Apr 4 00:54:53.021: INFO: Number of nodes with available pods: 0
Apr 4 00:54:53.021: INFO: Number of running nodes: 0, number of available pods: 0
Apr 4 00:54:53.024: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-7095/daemonsets","resourceVersion":"5215744"},"items":null}
Apr 4 00:54:53.026: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-7095/pods","resourceVersion":"5215744"},"items":null}
[AfterEach] [sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 4 00:54:53.067: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-7095" for this suite.
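The teardown above ("will wait for the garbage collector to delete the pods", followed by counts dropping to 0) is a poll-until-gone loop. A small sketch of that pattern, assuming a stand-in `get_pod_count` callable in place of a real API query (names here are illustrative, not the framework's):

```python
import time

# Sketch of the poll-until-deleted pattern the framework logs above:
# repeatedly sample a count, succeeding once it reaches zero or giving
# up at the deadline. `get_pod_count` is a hypothetical stand-in for
# listing the DaemonSet's remaining pods.

def wait_for_deletion(get_pod_count, timeout=60.0, interval=1.0):
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if get_pod_count() == 0:
            return True
        time.sleep(interval)
    return False

# Simulated run: a few polls still see pods, then the GC finishes.
counts = iter([2, 1, 1, 0])
print(wait_for_deletion(lambda: next(counts), timeout=5, interval=0))  # True
```

In the log this took roughly seven seconds (00:54:46.918 to 00:54:53.021) before the pod count reached zero.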
• [SLOW TEST:23.496 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
should run and stop simple daemon [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance]","total":275,"completed":274,"skipped":4674,"failed":0}
[sig-storage] EmptyDir volumes should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 4 00:54:53.074: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test emptydir 0777 on node default medium
Apr 4 00:54:53.131: INFO: Waiting up to 5m0s for pod "pod-d573114b-25ce-4d0a-893b-6092cd72291e" in namespace "emptydir-6494" to be "Succeeded or Failed"
Apr 4 00:54:53.134: INFO: Pod "pod-d573114b-25ce-4d0a-893b-6092cd72291e": Phase="Pending", Reason="", readiness=false. Elapsed: 3.472093ms
Apr 4 00:54:55.139: INFO: Pod "pod-d573114b-25ce-4d0a-893b-6092cd72291e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008138054s
Apr 4 00:54:57.143: INFO: Pod "pod-d573114b-25ce-4d0a-893b-6092cd72291e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.01257665s
STEP: Saw pod success
Apr 4 00:54:57.143: INFO: Pod "pod-d573114b-25ce-4d0a-893b-6092cd72291e" satisfied condition "Succeeded or Failed"
Apr 4 00:54:57.147: INFO: Trying to get logs from node latest-worker pod pod-d573114b-25ce-4d0a-893b-6092cd72291e container test-container:
STEP: delete the pod
Apr 4 00:54:57.163: INFO: Waiting for pod pod-d573114b-25ce-4d0a-893b-6092cd72291e to disappear
Apr 4 00:54:57.168: INFO: Pod pod-d573114b-25ce-4d0a-893b-6092cd72291e no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 4 00:54:57.168: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-6494" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":275,"skipped":4674,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
Apr 4 00:54:57.176: INFO: Running AfterSuite actions on all nodes
Apr 4 00:54:57.176: INFO: Running AfterSuite actions on node 1
Apr 4 00:54:57.176: INFO: Skipping dumping logs from cluster
JUnit report was created: /home/opnfv/functest/results/k8s_conformance/junit_01.xml
{"msg":"Test Suite completed","total":275,"completed":275,"skipped":4717,"failed":0}
Ran 275 of 4992 Specs in 4642.514 seconds
SUCCESS! -- 275 Passed | 0 Failed | 0 Pending | 4717 Skipped
PASS
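The `{"msg": ...}` lines interleaved through the log above are per-spec JSON progress records, and the final one reconciles with the plain-text summary: 275 run plus 4717 skipped equals the 4992 total specs. A small sketch of tallying them, using two records copied verbatim from this log:

```python
import json

# Tally the JSON progress records the suite emits per spec. The two
# records below are copied from the log above; the last one carries the
# suite totals.
records = [
    '{"msg":"PASSED [sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance]","total":275,"completed":274,"skipped":4674,"failed":0}',
    '{"msg":"Test Suite completed","total":275,"completed":275,"skipped":4717,"failed":0}',
]

final = json.loads(records[-1])
print(f"Ran {final['completed']} of {final['total'] + final['skipped']} specs, "
      f"{final['failed']} failed")  # Ran 275 of 4992 specs, 0 failed
```

This matches the runner's own closing line, "Ran 275 of 4992 Specs in 4642.514 seconds".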