I0406 21:06:22.083251 6 test_context.go:419] Tolerating taints "node-role.kubernetes.io/master" when considering if nodes are ready
I0406 21:06:22.083514 6 e2e.go:109] Starting e2e run "e3428125-b2df-4352-968e-00ca0ce59725" on Ginkgo node 1
{"msg":"Test Suite starting","total":278,"completed":0,"skipped":0,"failed":0}
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1586207181 - Will randomize all specs
Will run 278 of 4842 specs

Apr 6 21:06:22.147: INFO: >>> kubeConfig: /root/.kube/config
Apr 6 21:06:22.152: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
Apr 6 21:06:22.180: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Apr 6 21:06:22.216: INFO: 12 / 12 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Apr 6 21:06:22.216: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready.
Apr 6 21:06:22.216: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
Apr 6 21:06:22.227: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kindnet' (0 seconds elapsed)
Apr 6 21:06:22.227: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
Apr 6 21:06:22.227: INFO: e2e test version: v1.17.4
Apr 6 21:06:22.228: INFO: kube-apiserver version: v1.17.2
Apr 6 21:06:22.228: INFO: >>> kubeConfig: /root/.kube/config
Apr 6 21:06:22.233: INFO: Cluster IP family: ipv4
[sig-cli] Kubectl client Kubectl patch
  should add annotations for pods in rc [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Apr 6 21:06:22.233: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
Apr 6 21:06:22.289: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272
[It] should add annotations for pods in rc [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating Agnhost RC
Apr 6 21:06:22.291: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-5193'
Apr 6 21:06:24.886: INFO: stderr: ""
Apr 6 21:06:24.886: INFO: stdout: "replicationcontroller/agnhost-master created\n"
STEP: Waiting for Agnhost master to start.
Apr 6 21:06:25.891: INFO: Selector matched 1 pods for map[app:agnhost]
Apr 6 21:06:25.891: INFO: Found 0 / 1
Apr 6 21:06:26.891: INFO: Selector matched 1 pods for map[app:agnhost]
Apr 6 21:06:26.891: INFO: Found 0 / 1
Apr 6 21:06:27.891: INFO: Selector matched 1 pods for map[app:agnhost]
Apr 6 21:06:27.891: INFO: Found 0 / 1
Apr 6 21:06:28.891: INFO: Selector matched 1 pods for map[app:agnhost]
Apr 6 21:06:28.891: INFO: Found 1 / 1
Apr 6 21:06:28.891: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1
STEP: patching all pods
Apr 6 21:06:28.894: INFO: Selector matched 1 pods for map[app:agnhost]
Apr 6 21:06:28.895: INFO: ForEach: Found 1 pods from the filter. Now looping through them.
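The patch applied in the next step is an ordinary strategic merge patch of pod metadata. A hand-run equivalent, reusing the pod and namespace names from this run (nothing else here is from the log), would be:

kubectl --kubeconfig=/root/.kube/config patch pod agnhost-master-d4bfm \
  --namespace=kubectl-5193 -p '{"metadata":{"annotations":{"x":"y"}}}'
# verify the annotation landed; this should print: y
kubectl --kubeconfig=/root/.kube/config get pod agnhost-master-d4bfm \
  --namespace=kubectl-5193 -o jsonpath='{.metadata.annotations.x}'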
Apr 6 21:06:28.895: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config patch pod agnhost-master-d4bfm --namespace=kubectl-5193 -p {"metadata":{"annotations":{"x":"y"}}}'
Apr 6 21:06:29.001: INFO: stderr: ""
Apr 6 21:06:29.001: INFO: stdout: "pod/agnhost-master-d4bfm patched\n"
STEP: checking annotations
Apr 6 21:06:29.018: INFO: Selector matched 1 pods for map[app:agnhost]
Apr 6 21:06:29.018: INFO: ForEach: Found 1 pods from the filter. Now looping through them.
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Apr 6 21:06:29.018: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-5193" for this suite.
• [SLOW TEST:6.794 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl patch
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1432
    should add annotations for pods in rc [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl patch should add annotations for pods in rc [Conformance]","total":278,"completed":1,"skipped":0,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  should mutate pod and apply defaults after mutation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Apr 6 21:06:29.028: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Apr 6 21:06:29.958: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Apr 6 21:06:32.024: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721803989, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721803989, loc:(*time.Location)(0x78ee080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721803990, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721803989, loc:(*time.Location)(0x78ee080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Apr 6 21:06:35.100: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should mutate pod and apply defaults after mutation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Registering the mutating pod webhook via the AdmissionRegistration API
STEP: create a pod that should be updated by the webhook
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Apr 6 21:06:35.215: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-3514" for this suite.
STEP: Destroying namespace "webhook-3514-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
• [SLOW TEST:6.470 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should mutate pod and apply defaults after mutation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","total":278,"completed":2,"skipped":64,"failed":0}
[sig-cli] Kubectl client Kubectl logs
  should be able to retrieve and filter logs [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Apr 6 21:06:35.499: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272
[BeforeEach] Kubectl logs
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1357
STEP: creating a pod
Apr 6 21:06:35.566: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run logs-generator --generator=run-pod/v1 --image=gcr.io/kubernetes-e2e-test-images/agnhost:2.8 --namespace=kubectl-503 -- logs-generator --log-lines-total 100 --run-duration 20s'
Apr 6 21:06:35.694: INFO: stderr: ""
Apr 6 21:06:35.694: INFO: stdout: "pod/logs-generator created\n"
[It] should be able to retrieve and filter logs [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Waiting for log generator to start.
Apr 6 21:06:35.694: INFO: Waiting up to 5m0s for 1 pods to be running and ready, or succeeded: [logs-generator]
Apr 6 21:06:35.694: INFO: Waiting up to 5m0s for pod "logs-generator" in namespace "kubectl-503" to be "running and ready, or succeeded"
Apr 6 21:06:35.755: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 60.572362ms
Apr 6 21:06:37.759: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 2.0643868s
Apr 6 21:06:39.763: INFO: Pod "logs-generator": Phase="Running", Reason="", readiness=true. Elapsed: 4.068488858s
Apr 6 21:06:39.763: INFO: Pod "logs-generator" satisfied condition "running and ready, or succeeded"
Apr 6 21:06:39.763: INFO: Wanted all 1 pods to be running and ready, or succeeded. Result: true. Pods: [logs-generator]
STEP: checking for matching strings
Apr 6 21:06:39.763: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-503'
Apr 6 21:06:39.884: INFO: stderr: ""
Apr 6 21:06:39.884: INFO: stdout: "I0406 21:06:38.604725 1 logs_generator.go:76] 0 GET /api/v1/namespaces/ns/pods/kmqm 420\nI0406 21:06:38.804904 1 logs_generator.go:76] 1 GET /api/v1/namespaces/ns/pods/7ch 218\nI0406 21:06:39.004912 1 logs_generator.go:76] 2 PUT /api/v1/namespaces/kube-system/pods/n42d 390\nI0406 21:06:39.204884 1 logs_generator.go:76] 3 GET /api/v1/namespaces/kube-system/pods/4gq 540\nI0406 21:06:39.404974 1 logs_generator.go:76] 4 PUT /api/v1/namespaces/kube-system/pods/xzbf 283\nI0406 21:06:39.604902 1 logs_generator.go:76] 5 GET /api/v1/namespaces/ns/pods/nt9w 289\nI0406 21:06:39.804882 1 logs_generator.go:76] 6 GET /api/v1/namespaces/ns/pods/27r 270\n"
STEP: limiting log lines
Apr 6 21:06:39.884: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-503 --tail=1'
Apr 6 21:06:39.995: INFO: stderr: ""
Apr 6 21:06:39.995: INFO: stdout: "I0406 21:06:39.804882 1 logs_generator.go:76] 6 GET /api/v1/namespaces/ns/pods/27r 270\n"
Apr 6 21:06:39.995: INFO: got output "I0406 21:06:39.804882 1 logs_generator.go:76] 6 GET /api/v1/namespaces/ns/pods/27r 270\n"
STEP: limiting log bytes
Apr 6 21:06:39.995: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-503 --limit-bytes=1'
Apr 6 21:06:40.106: INFO: stderr: ""
Apr 6 21:06:40.106: INFO: stdout: "I"
Apr 6 21:06:40.106: INFO: got output "I"
STEP: exposing timestamps
Apr 6 21:06:40.106: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-503 --tail=1 --timestamps'
Apr 6 21:06:40.223: INFO: stderr: ""
Apr 6 21:06:40.223: INFO: stdout: "2020-04-06T21:06:40.204997928Z I0406 21:06:40.204865 1 logs_generator.go:76] 8 PUT /api/v1/namespaces/kube-system/pods/ffsk 208\n"
Apr 6 21:06:40.223: INFO: got output "2020-04-06T21:06:40.204997928Z I0406 21:06:40.204865 1 logs_generator.go:76] 8 PUT /api/v1/namespaces/kube-system/pods/ffsk 208\n"
STEP: restricting to a time range
Apr 6 21:06:42.723: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-503 --since=1s'
Apr 6 21:06:42.848: INFO: stderr: ""
Apr 6 21:06:42.848: INFO: stdout: "I0406 21:06:42.004877 1 logs_generator.go:76] 17 PUT /api/v1/namespaces/ns/pods/g66 517\nI0406 21:06:42.204843 1 logs_generator.go:76] 18 GET /api/v1/namespaces/ns/pods/8n9 205\nI0406 21:06:42.404941 1 logs_generator.go:76] 19 GET /api/v1/namespaces/ns/pods/7vlp 314\nI0406 21:06:42.604919 1 logs_generator.go:76] 20 GET /api/v1/namespaces/ns/pods/ppxk 454\nI0406 21:06:42.804904 1 logs_generator.go:76] 21 POST /api/v1/namespaces/default/pods/2pz 552\n"
Apr 6 21:06:42.848: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-503 --since=24h'
Apr 6 21:06:42.966: INFO: stderr: ""
Apr 6 21:06:42.966: INFO: stdout: "I0406 21:06:38.604725 1 logs_generator.go:76] 0 GET /api/v1/namespaces/ns/pods/kmqm 420\nI0406 21:06:38.804904 1 logs_generator.go:76] 1 GET /api/v1/namespaces/ns/pods/7ch 218\nI0406 21:06:39.004912 1 logs_generator.go:76] 2 PUT /api/v1/namespaces/kube-system/pods/n42d 390\nI0406 21:06:39.204884 1 logs_generator.go:76] 3 GET /api/v1/namespaces/kube-system/pods/4gq 540\nI0406 21:06:39.404974 1 logs_generator.go:76] 4 PUT /api/v1/namespaces/kube-system/pods/xzbf 283\nI0406 21:06:39.604902 1 logs_generator.go:76] 5 GET /api/v1/namespaces/ns/pods/nt9w 289\nI0406 21:06:39.804882 1 logs_generator.go:76] 6 GET /api/v1/namespaces/ns/pods/27r 270\nI0406 21:06:40.004874 1 logs_generator.go:76] 7 POST /api/v1/namespaces/ns/pods/sbw 279\nI0406 21:06:40.204865 1 logs_generator.go:76] 8 PUT /api/v1/namespaces/kube-system/pods/ffsk 208\nI0406 21:06:40.404902 1 logs_generator.go:76] 9 POST /api/v1/namespaces/ns/pods/p7ct 520\nI0406 21:06:40.604960 1 logs_generator.go:76] 10 POST /api/v1/namespaces/default/pods/l4g 483\nI0406 21:06:40.804899 1 logs_generator.go:76] 11 POST /api/v1/namespaces/default/pods/f9r4 370\nI0406 21:06:41.004907 1 logs_generator.go:76] 12 PUT /api/v1/namespaces/kube-system/pods/lq8 516\nI0406 21:06:41.204894 1 logs_generator.go:76] 13 PUT /api/v1/namespaces/ns/pods/f6vs 320\nI0406 21:06:41.404929 1 logs_generator.go:76] 14 POST /api/v1/namespaces/ns/pods/46w 578\nI0406 21:06:41.604892 1 logs_generator.go:76] 15 PUT /api/v1/namespaces/ns/pods/gk8 263\nI0406 21:06:41.804898 1 logs_generator.go:76] 16 POST /api/v1/namespaces/default/pods/thff 240\nI0406 21:06:42.004877 1 logs_generator.go:76] 17 PUT /api/v1/namespaces/ns/pods/g66 517\nI0406 21:06:42.204843 1 logs_generator.go:76] 18 GET /api/v1/namespaces/ns/pods/8n9 205\nI0406 21:06:42.404941 1 logs_generator.go:76] 19 GET /api/v1/namespaces/ns/pods/7vlp 314\nI0406 21:06:42.604919 1 logs_generator.go:76] 20 GET /api/v1/namespaces/ns/pods/ppxk 454\nI0406 21:06:42.804904 1 logs_generator.go:76] 21 POST /api/v1/namespaces/default/pods/2pz 552\n"
[AfterEach] Kubectl logs
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1363
Apr 6 21:06:42.966: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pod logs-generator --namespace=kubectl-503'
Apr 6 21:06:45.494: INFO: stderr: ""
Apr 6 21:06:45.494: INFO: stdout: "pod \"logs-generator\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Apr 6 21:06:45.494: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-503" for this suite.
• [SLOW TEST:10.006 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl logs
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1353
    should be able to retrieve and filter logs [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]","total":278,"completed":3,"skipped":64,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota
  should create a ResourceQuota and capture the life of a pod. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Apr 6 21:06:45.506: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and capture the life of a pod. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
STEP: Creating a Pod that fits quota
STEP: Ensuring ResourceQuota status captures the pod usage
STEP: Not allowing a pod to be created that exceeds remaining quota
STEP: Not allowing a pod to be created that exceeds remaining quota(validation on extended resources)
STEP: Ensuring a pod cannot update its resource requirements
STEP: Ensuring attempts to update pod resource requirements did not change quota usage
STEP: Deleting the pod
STEP: Ensuring resource quota status released the pod usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Apr 6 21:06:58.681: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-2722" for this suite.
• [SLOW TEST:13.184 seconds]
[sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a pod. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a pod. [Conformance]","total":278,"completed":4,"skipped":120,"failed":0}
[sig-node] Downward API
  should provide host IP as an env var [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Apr 6 21:06:58.690: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide host IP as an env var [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward api env vars
Apr 6 21:06:58.751: INFO: Waiting up to 5m0s for pod "downward-api-2ca73df6-d9dc-48ed-988e-1c353373b9c1" in namespace "downward-api-4140" to be "success or failure"
Apr 6 21:06:58.794: INFO: Pod "downward-api-2ca73df6-d9dc-48ed-988e-1c353373b9c1": Phase="Pending", Reason="", readiness=false. Elapsed: 42.832144ms
Apr 6 21:07:00.800: INFO: Pod "downward-api-2ca73df6-d9dc-48ed-988e-1c353373b9c1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.048469986s
Apr 6 21:07:02.803: INFO: Pod "downward-api-2ca73df6-d9dc-48ed-988e-1c353373b9c1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.052304161s
STEP: Saw pod success
Apr 6 21:07:02.804: INFO: Pod "downward-api-2ca73df6-d9dc-48ed-988e-1c353373b9c1" satisfied condition "success or failure"
Apr 6 21:07:02.806: INFO: Trying to get logs from node jerma-worker pod downward-api-2ca73df6-d9dc-48ed-988e-1c353373b9c1 container dapi-container:
STEP: delete the pod
Apr 6 21:07:02.833: INFO: Waiting for pod downward-api-2ca73df6-d9dc-48ed-988e-1c353373b9c1 to disappear
Apr 6 21:07:02.866: INFO: Pod downward-api-2ca73df6-d9dc-48ed-988e-1c353373b9c1 no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Apr 6 21:07:02.866: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-4140" for this suite.
•{"msg":"PASSED [sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance]","total":278,"completed":5,"skipped":120,"failed":0}
SSSSSSS
------------------------------
[sig-storage] Projected downwardAPI
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Apr 6 21:07:02.874: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40
[It] should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
Apr 6 21:07:02.940: INFO: Waiting up to 5m0s for pod "downwardapi-volume-1f76f9b0-0629-4054-8e50-3e40522cd23b" in namespace "projected-4931" to be "success or failure"
Apr 6 21:07:02.943: INFO: Pod "downwardapi-volume-1f76f9b0-0629-4054-8e50-3e40522cd23b": Phase="Pending", Reason="", readiness=false. Elapsed: 3.030586ms
Apr 6 21:07:04.947: INFO: Pod "downwardapi-volume-1f76f9b0-0629-4054-8e50-3e40522cd23b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007174924s
Apr 6 21:07:06.951: INFO: Pod "downwardapi-volume-1f76f9b0-0629-4054-8e50-3e40522cd23b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011335915s
STEP: Saw pod success
Apr 6 21:07:06.951: INFO: Pod "downwardapi-volume-1f76f9b0-0629-4054-8e50-3e40522cd23b" satisfied condition "success or failure"
Apr 6 21:07:06.955: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-1f76f9b0-0629-4054-8e50-3e40522cd23b container client-container:
STEP: delete the pod
Apr 6 21:07:07.034: INFO: Waiting for pod downwardapi-volume-1f76f9b0-0629-4054-8e50-3e40522cd23b to disappear
Apr 6 21:07:07.044: INFO: Pod downwardapi-volume-1f76f9b0-0629-4054-8e50-3e40522cd23b no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Apr 6 21:07:07.044: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-4931" for this suite.
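The two Downward API specs above (host IP as an env var, CPU request via a projected volume) reduce to pod fields any pod can use. A minimal sketch combining both cases, with illustrative names and busybox standing in for the test image:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: downward-demo            # illustrative name, not from the run
spec:
  restartPolicy: Never
  containers:
  - name: main
    image: busybox
    command: ["sh", "-c", "echo HOST_IP=$HOST_IP && cat /etc/podinfo/cpu_request"]
    resources:
      requests:
        cpu: 250m                # the request the volume file reports back
    env:
    - name: HOST_IP              # the host-IP-as-env-var case
      valueFrom:
        fieldRef:
          fieldPath: status.hostIP
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: cpu_request    # the container's-cpu-request case
            resourceFieldRef:
              containerName: main
              resource: requests.cpu
EOF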
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance]","total":278,"completed":6,"skipped":127,"failed":0} SSSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 6 21:07:07.050: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set Apr 6 21:07:11.193: INFO: Expected: &{DONE} to match Container's Termination Message: DONE -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 6 21:07:11.249: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-2882" for this suite. •{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]","total":278,"completed":7,"skipped":136,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a service. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 6 21:07:11.256: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a service. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a Service STEP: Ensuring resource quota status captures service creation STEP: Deleting a Service STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 6 21:07:22.388: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-3086" for this suite. 
• [SLOW TEST:11.141 seconds]
[sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a service. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a service. [Conformance]","total":278,"completed":8,"skipped":166,"failed":0}
SSSSSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicationController
  should release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Apr 6 21:07:22.397: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Given a ReplicationController is created
STEP: When the matched label of one of its pods change
Apr 6 21:07:22.452: INFO: Pod name pod-release: Found 0 pods out of 1
Apr 6 21:07:27.482: INFO: Pod name pod-release: Found 1 pods out of 1
STEP: Then the pod is released
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Apr 6 21:07:28.508: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-6855" for this suite.
• [SLOW TEST:6.118 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] ReplicationController should release no longer matching pods [Conformance]","total":278,"completed":9,"skipped":182,"failed":0}
S
------------------------------
[k8s.io] Variable Expansion
  should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Apr 6 21:07:28.516: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test env composition
Apr 6 21:07:28.608: INFO: Waiting up to 5m0s for pod "var-expansion-d9ca864e-fc76-4071-914a-04a5e32db43a" in namespace "var-expansion-2085" to be "success or failure"
Apr 6 21:07:28.611: INFO: Pod "var-expansion-d9ca864e-fc76-4071-914a-04a5e32db43a": Phase="Pending", Reason="", readiness=false. Elapsed: 3.347554ms
Apr 6 21:07:30.630: INFO: Pod "var-expansion-d9ca864e-fc76-4071-914a-04a5e32db43a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022243311s
Apr 6 21:07:32.634: INFO: Pod "var-expansion-d9ca864e-fc76-4071-914a-04a5e32db43a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.025886267s
STEP: Saw pod success
Apr 6 21:07:32.634: INFO: Pod "var-expansion-d9ca864e-fc76-4071-914a-04a5e32db43a" satisfied condition "success or failure"
Apr 6 21:07:32.637: INFO: Trying to get logs from node jerma-worker2 pod var-expansion-d9ca864e-fc76-4071-914a-04a5e32db43a container dapi-container:
STEP: delete the pod
Apr 6 21:07:32.826: INFO: Waiting for pod var-expansion-d9ca864e-fc76-4071-914a-04a5e32db43a to disappear
Apr 6 21:07:32.892: INFO: Pod var-expansion-d9ca864e-fc76-4071-914a-04a5e32db43a no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Apr 6 21:07:32.892: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-2085" for this suite.
•{"msg":"PASSED [k8s.io] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance]","total":278,"completed":10,"skipped":183,"failed":0}
SSSSSSSSSS
------------------------------
[sig-storage] EmptyDir wrapper volumes
  should not cause race condition when used for configmaps [Serial] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Apr 6 21:07:32.902: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir-wrapper
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not cause race condition when used for configmaps [Serial] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating 50 configmaps
STEP: Creating RC which spawns configmap-volume pods
Apr 6 21:07:34.168: INFO: Pod name wrapped-volume-race-aab37333-63af-40b2-9520-14f63e228644: Found 0 pods out of 5
Apr 6 21:07:39.186: INFO: Pod name wrapped-volume-race-aab37333-63af-40b2-9520-14f63e228644: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-aab37333-63af-40b2-9520-14f63e228644 in namespace emptydir-wrapper-7323, will wait for the garbage collector to delete the pods
Apr 6 21:07:51.270: INFO: Deleting ReplicationController wrapped-volume-race-aab37333-63af-40b2-9520-14f63e228644 took: 7.975403ms
Apr 6 21:07:51.670: INFO: Terminating ReplicationController wrapped-volume-race-aab37333-63af-40b2-9520-14f63e228644 pods took: 400.356267ms
STEP: Creating RC which spawns configmap-volume pods
Apr 6 21:08:00.599: INFO: Pod name wrapped-volume-race-70be746c-dd56-4df7-8250-b27d75d7f0a0: Found 0 pods out of 5
Apr 6 21:08:05.607: INFO: Pod name wrapped-volume-race-70be746c-dd56-4df7-8250-b27d75d7f0a0: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-70be746c-dd56-4df7-8250-b27d75d7f0a0 in namespace emptydir-wrapper-7323, will wait for the garbage collector to delete the pods
Apr 6 21:08:17.758: INFO: Deleting ReplicationController wrapped-volume-race-70be746c-dd56-4df7-8250-b27d75d7f0a0 took: 8.470738ms
Apr 6 21:08:18.158: INFO: Terminating ReplicationController wrapped-volume-race-70be746c-dd56-4df7-8250-b27d75d7f0a0 pods took: 400.271822ms
STEP: Creating RC which spawns configmap-volume pods
Apr 6 21:08:30.340: INFO: Pod name wrapped-volume-race-0d9d0e44-8dcd-4490-b98a-04a588b4f0b6: Found 0 pods out of 5
Apr 6 21:08:35.347: INFO: Pod name wrapped-volume-race-0d9d0e44-8dcd-4490-b98a-04a588b4f0b6: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-0d9d0e44-8dcd-4490-b98a-04a588b4f0b6 in namespace emptydir-wrapper-7323, will wait for the garbage collector to delete the pods
Apr 6 21:08:49.446: INFO: Deleting ReplicationController wrapped-volume-race-0d9d0e44-8dcd-4490-b98a-04a588b4f0b6 took: 29.079904ms
Apr 6 21:08:49.746: INFO: Terminating ReplicationController wrapped-volume-race-0d9d0e44-8dcd-4490-b98a-04a588b4f0b6 pods took: 300.307621ms
STEP: Cleaning up the configMaps
[AfterEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Apr 6 21:09:00.421: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-wrapper-7323" for this suite.
• [SLOW TEST:87.528 seconds]
[sig-storage] EmptyDir wrapper volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  should not cause race condition when used for configmaps [Serial] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Conformance]","total":278,"completed":11,"skipped":193,"failed":0}
SSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  should deny crd creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Apr 6 21:09:00.430: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Apr 6 21:09:00.929: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Apr 6 21:09:02.939: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721804140, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721804140, loc:(*time.Location)(0x78ee080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721804141, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721804140, loc:(*time.Location)(0x78ee080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Apr 6 21:09:05.986: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should deny crd creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Registering the crd webhook via the AdmissionRegistration API
STEP: Creating a custom resource definition that should be denied by the webhook
Apr 6 21:09:06.110: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Apr 6 21:09:06.126: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-1472" for this suite.
STEP: Destroying namespace "webhook-1472-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
• [SLOW TEST:5.951 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should deny crd creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","total":278,"completed":12,"skipped":196,"failed":0}
SSSSSSSSSSS
------------------------------
[sig-storage] HostPath
  should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] HostPath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Apr 6 21:09:06.382: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename hostpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] HostPath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:37
[It] should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test hostPath mode
Apr 6 21:09:06.573: INFO: Waiting up to 5m0s for pod "pod-host-path-test" in namespace "hostpath-5885" to be "success or failure"
Apr 6 21:09:06.921: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 348.107268ms
Apr 6 21:09:08.946: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 2.372430198s
Apr 6 21:09:10.962: INFO: Pod "pod-host-path-test": Phase="Running", Reason="", readiness=false. Elapsed: 4.389041788s
Apr 6 21:09:12.967: INFO: Pod "pod-host-path-test": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.393645131s
STEP: Saw pod success
Apr 6 21:09:12.967: INFO: Pod "pod-host-path-test" satisfied condition "success or failure"
Apr 6 21:09:12.970: INFO: Trying to get logs from node jerma-worker pod pod-host-path-test container test-container-1:
STEP: delete the pod
Apr 6 21:09:12.994: INFO: Waiting for pod pod-host-path-test to disappear
Apr 6 21:09:13.040: INFO: Pod pod-host-path-test no longer exists
[AfterEach] [sig-storage] HostPath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Apr 6 21:09:13.041: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "hostpath-5885" for this suite.
• [SLOW TEST:6.668 seconds]
[sig-storage] HostPath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:34
  should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] HostPath should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":13,"skipped":207,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota
  should create a ResourceQuota and capture the life of a configMap. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Apr 6 21:09:13.050: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and capture the life of a configMap. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
STEP: Creating a ConfigMap
STEP: Ensuring resource quota status captures configMap creation
STEP: Deleting a ConfigMap
STEP: Ensuring resource quota status released usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Apr 6 21:09:29.156: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-9493" for this suite.
• [SLOW TEST:16.114 seconds]
[sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a configMap. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a configMap. [Conformance]","total":278,"completed":14,"skipped":241,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap
  should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Apr 6 21:09:29.165: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name configmap-test-volume-5ba3b515-d4e5-4511-8413-fa68eac279e6
STEP: Creating a pod to test consume configMaps
Apr 6 21:09:29.277: INFO: Waiting up to 5m0s for pod "pod-configmaps-984fb9da-78e8-496e-8973-60b13af45653" in namespace "configmap-5669" to be "success or failure"
Apr 6 21:09:29.290: INFO: Pod "pod-configmaps-984fb9da-78e8-496e-8973-60b13af45653": Phase="Pending", Reason="", readiness=false. Elapsed: 12.569737ms
Apr 6 21:09:31.294: INFO: Pod "pod-configmaps-984fb9da-78e8-496e-8973-60b13af45653": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016543823s
Apr 6 21:09:33.298: INFO: Pod "pod-configmaps-984fb9da-78e8-496e-8973-60b13af45653": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.020819912s
STEP: Saw pod success
Apr 6 21:09:33.298: INFO: Pod "pod-configmaps-984fb9da-78e8-496e-8973-60b13af45653" satisfied condition "success or failure"
Apr 6 21:09:33.302: INFO: Trying to get logs from node jerma-worker pod pod-configmaps-984fb9da-78e8-496e-8973-60b13af45653 container configmap-volume-test:
STEP: delete the pod
Apr 6 21:09:33.320: INFO: Waiting for pod pod-configmaps-984fb9da-78e8-496e-8973-60b13af45653 to disappear
Apr 6 21:09:33.351: INFO: Pod pod-configmaps-984fb9da-78e8-496e-8973-60b13af45653 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Apr 6 21:09:33.351: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-5669" for this suite.
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":15,"skipped":266,"failed":0} ------------------------------ [sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 6 21:09:33.359: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the rc STEP: delete the rc STEP: wait for the rc to be deleted Apr 6 21:09:40.398: INFO: 0 pods remaining Apr 6 21:09:40.398: INFO: 0 pods has nil DeletionTimestamp Apr 6 21:09:40.398: INFO: Apr 6 21:09:40.884: INFO: 0 pods remaining Apr 6 21:09:40.884: INFO: 0 pods has nil DeletionTimestamp Apr 6 21:09:40.884: INFO: STEP: Gathering metrics W0406 21:09:42.336575 6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Apr 6 21:09:42.336: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 6 21:09:42.336: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-1724" for this suite. 
• [SLOW TEST:9.400 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]","total":278,"completed":16,"skipped":266,"failed":0}
SSSSSS
------------------------------
[sig-storage] EmptyDir volumes
  should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Apr 6 21:09:42.759: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test emptydir 0777 on node default medium
Apr 6 21:09:43.284: INFO: Waiting up to 5m0s for pod "pod-98fdbfac-e4b6-42ce-a6db-5da3c8d93b35" in namespace "emptydir-9167" to be "success or failure"
Apr 6 21:09:43.308: INFO: Pod "pod-98fdbfac-e4b6-42ce-a6db-5da3c8d93b35": Phase="Pending", Reason="", readiness=false. Elapsed: 23.4874ms
Apr 6 21:09:45.311: INFO: Pod "pod-98fdbfac-e4b6-42ce-a6db-5da3c8d93b35": Phase="Pending", Reason="", readiness=false. Elapsed: 2.026743816s
Apr 6 21:09:47.315: INFO: Pod "pod-98fdbfac-e4b6-42ce-a6db-5da3c8d93b35": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.03086307s
STEP: Saw pod success
Apr 6 21:09:47.315: INFO: Pod "pod-98fdbfac-e4b6-42ce-a6db-5da3c8d93b35" satisfied condition "success or failure"
Apr 6 21:09:47.318: INFO: Trying to get logs from node jerma-worker2 pod pod-98fdbfac-e4b6-42ce-a6db-5da3c8d93b35 container test-container:
STEP: delete the pod
Apr 6 21:09:47.394: INFO: Waiting for pod pod-98fdbfac-e4b6-42ce-a6db-5da3c8d93b35 to disappear
Apr 6 21:09:47.398: INFO: Pod pod-98fdbfac-e4b6-42ce-a6db-5da3c8d93b35 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Apr 6 21:09:47.398: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-9167" for this suite.
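The (root,0777,default) case above translates to a pod roughly like the following sketch: an emptyDir on the node's default medium, written as root through a mount whose file is created with 0777. Names are illustrative and busybox stands in for the test's mount-test image:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-demo
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox
    command: ["sh", "-c", "echo -n mount-tester > /test-volume/test-file && chmod 0777 /test-volume/test-file && ls -l /test-volume/test-file"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir: {}                 # no medium set = the node's default storage
EOF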
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":17,"skipped":272,"failed":0} SSSSS ------------------------------ [sig-api-machinery] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 6 21:09:47.406: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name secret-test-661c1b8c-ed12-49be-ac08-50db5584f536 STEP: Creating a pod to test consume secrets Apr 6 21:09:47.484: INFO: Waiting up to 5m0s for pod "pod-secrets-757df663-4ccb-45b1-847c-02ebdc3c2379" in namespace "secrets-167" to be "success or failure" Apr 6 21:09:47.487: INFO: Pod "pod-secrets-757df663-4ccb-45b1-847c-02ebdc3c2379": Phase="Pending", Reason="", readiness=false. Elapsed: 2.575367ms Apr 6 21:09:49.492: INFO: Pod "pod-secrets-757df663-4ccb-45b1-847c-02ebdc3c2379": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007101679s Apr 6 21:09:51.502: INFO: Pod "pod-secrets-757df663-4ccb-45b1-847c-02ebdc3c2379": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.017685653s STEP: Saw pod success Apr 6 21:09:51.502: INFO: Pod "pod-secrets-757df663-4ccb-45b1-847c-02ebdc3c2379" satisfied condition "success or failure" Apr 6 21:09:51.505: INFO: Trying to get logs from node jerma-worker2 pod pod-secrets-757df663-4ccb-45b1-847c-02ebdc3c2379 container secret-env-test: STEP: delete the pod Apr 6 21:09:51.527: INFO: Waiting for pod pod-secrets-757df663-4ccb-45b1-847c-02ebdc3c2379 to disappear Apr 6 21:09:51.554: INFO: Pod pod-secrets-757df663-4ccb-45b1-847c-02ebdc3c2379 no longer exists [AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 6 21:09:51.555: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-167" for this suite. 
•{"msg":"PASSED [sig-api-machinery] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance]","total":278,"completed":18,"skipped":277,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 6 21:09:51.570: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name cm-test-opt-del-d3b97667-fa98-475a-9f92-e93d7b3af095 STEP: Creating configMap with name cm-test-opt-upd-fbec191e-209c-4998-bf5f-472ed4c62f84 STEP: Creating the pod STEP: Deleting configmap cm-test-opt-del-d3b97667-fa98-475a-9f92-e93d7b3af095 STEP: Updating configmap cm-test-opt-upd-fbec191e-209c-4998-bf5f-472ed4c62f84 STEP: Creating configMap with name cm-test-opt-create-5fcf8443-9574-431a-b534-22fed47c9b29 STEP: waiting to observe update in volume [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 6 21:11:18.186: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-9358" for this suite. • [SLOW TEST:86.624 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":19,"skipped":313,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 6 21:11:18.194: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting up data [It] should support subpaths with downward pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating pod pod-subpath-test-downwardapi-v9tn STEP: Creating a pod to test atomic-volume-subpath Apr 6 21:11:18.307: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-v9tn" in namespace "subpath-8846" to be "success or failure" Apr 6 21:11:18.316: INFO: Pod "pod-subpath-test-downwardapi-v9tn": 
Phase="Pending", Reason="", readiness=false. Elapsed: 8.7894ms Apr 6 21:11:20.324: INFO: Pod "pod-subpath-test-downwardapi-v9tn": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016790645s Apr 6 21:11:22.328: INFO: Pod "pod-subpath-test-downwardapi-v9tn": Phase="Running", Reason="", readiness=true. Elapsed: 4.020978281s Apr 6 21:11:24.333: INFO: Pod "pod-subpath-test-downwardapi-v9tn": Phase="Running", Reason="", readiness=true. Elapsed: 6.025639337s Apr 6 21:11:26.337: INFO: Pod "pod-subpath-test-downwardapi-v9tn": Phase="Running", Reason="", readiness=true. Elapsed: 8.030140557s Apr 6 21:11:28.341: INFO: Pod "pod-subpath-test-downwardapi-v9tn": Phase="Running", Reason="", readiness=true. Elapsed: 10.033971394s Apr 6 21:11:30.345: INFO: Pod "pod-subpath-test-downwardapi-v9tn": Phase="Running", Reason="", readiness=true. Elapsed: 12.038225935s Apr 6 21:11:32.349: INFO: Pod "pod-subpath-test-downwardapi-v9tn": Phase="Running", Reason="", readiness=true. Elapsed: 14.042029424s Apr 6 21:11:34.354: INFO: Pod "pod-subpath-test-downwardapi-v9tn": Phase="Running", Reason="", readiness=true. Elapsed: 16.046361193s Apr 6 21:11:36.358: INFO: Pod "pod-subpath-test-downwardapi-v9tn": Phase="Running", Reason="", readiness=true. Elapsed: 18.050601662s Apr 6 21:11:38.362: INFO: Pod "pod-subpath-test-downwardapi-v9tn": Phase="Running", Reason="", readiness=true. Elapsed: 20.054796403s Apr 6 21:11:40.366: INFO: Pod "pod-subpath-test-downwardapi-v9tn": Phase="Running", Reason="", readiness=true. Elapsed: 22.058945456s Apr 6 21:11:42.370: INFO: Pod "pod-subpath-test-downwardapi-v9tn": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.062645786s STEP: Saw pod success Apr 6 21:11:42.370: INFO: Pod "pod-subpath-test-downwardapi-v9tn" satisfied condition "success or failure" Apr 6 21:11:42.373: INFO: Trying to get logs from node jerma-worker2 pod pod-subpath-test-downwardapi-v9tn container test-container-subpath-downwardapi-v9tn: STEP: delete the pod Apr 6 21:11:42.414: INFO: Waiting for pod pod-subpath-test-downwardapi-v9tn to disappear Apr 6 21:11:42.438: INFO: Pod pod-subpath-test-downwardapi-v9tn no longer exists STEP: Deleting pod pod-subpath-test-downwardapi-v9tn Apr 6 21:11:42.438: INFO: Deleting pod "pod-subpath-test-downwardapi-v9tn" in namespace "subpath-8846" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 6 21:11:42.441: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-8846" for this suite. 
• [SLOW TEST:24.254 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with downward pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [LinuxOnly] [Conformance]","total":278,"completed":20,"skipped":336,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 6 21:11:42.449: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name secret-test-98a774bb-1eeb-44f8-b3e6-ceca8b9cfa60 STEP: Creating a pod to test consume secrets Apr 6 21:11:42.617: INFO: Waiting up to 5m0s for pod "pod-secrets-1a1365da-c190-40ba-8f7b-fe5f36e81481" in namespace "secrets-4334" to be "success or failure" Apr 6 21:11:42.623: INFO: Pod "pod-secrets-1a1365da-c190-40ba-8f7b-fe5f36e81481": Phase="Pending", Reason="", readiness=false. Elapsed: 5.908764ms Apr 6 21:11:44.628: INFO: Pod "pod-secrets-1a1365da-c190-40ba-8f7b-fe5f36e81481": Phase="Pending", Reason="", readiness=false. Elapsed: 2.010522559s Apr 6 21:11:46.642: INFO: Pod "pod-secrets-1a1365da-c190-40ba-8f7b-fe5f36e81481": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.025058921s STEP: Saw pod success Apr 6 21:11:46.642: INFO: Pod "pod-secrets-1a1365da-c190-40ba-8f7b-fe5f36e81481" satisfied condition "success or failure" Apr 6 21:11:46.645: INFO: Trying to get logs from node jerma-worker pod pod-secrets-1a1365da-c190-40ba-8f7b-fe5f36e81481 container secret-volume-test: STEP: delete the pod Apr 6 21:11:46.665: INFO: Waiting for pod pod-secrets-1a1365da-c190-40ba-8f7b-fe5f36e81481 to disappear Apr 6 21:11:46.670: INFO: Pod pod-secrets-1a1365da-c190-40ba-8f7b-fe5f36e81481 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 6 21:11:46.670: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-4334" for this suite. STEP: Destroying namespace "secret-namespace-829" for this suite. 
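Reference sketch for the cross-namespace Secrets test above: two Secrets with the same name in different namespaces do not interfere, because a secret volume is always resolved within the pod's own namespace. Names (demo-a, demo-b, shared-name) are illustrative.
kubectl create namespace demo-a
kubectl create namespace demo-b
kubectl create secret generic shared-name -n demo-a --from-literal=data=from-a
kubectl create secret generic shared-name -n demo-b --from-literal=data=from-b
kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: secret-mount-check
  namespace: demo-a
spec:
  restartPolicy: Never
  containers:
  - name: check
    image: busybox
    command: ["cat", "/etc/secret/data"]
    volumeMounts:
    - name: s
      mountPath: /etc/secret
      readOnly: true
  volumes:
  - name: s
    secret:
      secretName: shared-name   # resolved in the pod's namespace (demo-a)
EOF
kubectl logs -n demo-a secret-mount-check   # prints "from-a" once the pod completes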
•{"msg":"PASSED [sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]","total":278,"completed":21,"skipped":352,"failed":0} SSSSSSSSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 6 21:11:46.694: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153 [It] should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the pod Apr 6 21:11:46.731: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 6 21:11:53.184: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-8273" for this suite. • [SLOW TEST:6.501 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance]","total":278,"completed":22,"skipped":366,"failed":0} SSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate configmap [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 6 21:11:53.196: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Apr 6 21:11:53.849: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Apr 6 21:11:55.859: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, 
ext:63721804313, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721804313, loc:(*time.Location)(0x78ee080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721804313, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721804313, loc:(*time.Location)(0x78ee080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Apr 6 21:11:58.911: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate configmap [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Registering the mutating configmap webhook via the AdmissionRegistration API STEP: create a configmap that should be updated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 6 21:11:58.957: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-1364" for this suite. STEP: Destroying namespace "webhook-1364-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:5.895 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate configmap [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate configmap [Conformance]","total":278,"completed":23,"skipped":372,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 6 21:11:59.091: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-7093.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-7093.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Apr 6 21:12:05.396: INFO: DNS probes using dns-7093/dns-test-9322057e-42a1-4dad-8349-a96ee7e7cde4 succeeded STEP: deleting the pod [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 6 21:12:05.452: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-7093" for this suite. • [SLOW TEST:6.557 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for the cluster [Conformance]","total":278,"completed":24,"skipped":421,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir wrapper volumes should not conflict [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 6 21:12:05.649: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir-wrapper STEP: Waiting for a default service account to be provisioned in namespace [It] should not conflict [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Cleaning up the secret STEP: Cleaning up the configmap STEP: Cleaning up the pod [AfterEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 6 21:12:10.164: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-wrapper-4806" for this suite. 
•{"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not conflict [Conformance]","total":278,"completed":25,"skipped":449,"failed":0} SSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 6 21:12:10.176: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name projected-secret-test-099d5835-36d6-4bf1-86d8-3b7d7eb31b3e STEP: Creating a pod to test consume secrets Apr 6 21:12:10.439: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-548329ec-87a4-4287-8a3d-7b667cbcefd9" in namespace "projected-1138" to be "success or failure" Apr 6 21:12:10.517: INFO: Pod "pod-projected-secrets-548329ec-87a4-4287-8a3d-7b667cbcefd9": Phase="Pending", Reason="", readiness=false. Elapsed: 77.964278ms Apr 6 21:12:12.521: INFO: Pod "pod-projected-secrets-548329ec-87a4-4287-8a3d-7b667cbcefd9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.081472673s Apr 6 21:12:14.525: INFO: Pod "pod-projected-secrets-548329ec-87a4-4287-8a3d-7b667cbcefd9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.085754239s STEP: Saw pod success Apr 6 21:12:14.525: INFO: Pod "pod-projected-secrets-548329ec-87a4-4287-8a3d-7b667cbcefd9" satisfied condition "success or failure" Apr 6 21:12:14.528: INFO: Trying to get logs from node jerma-worker2 pod pod-projected-secrets-548329ec-87a4-4287-8a3d-7b667cbcefd9 container secret-volume-test: STEP: delete the pod Apr 6 21:12:14.561: INFO: Waiting for pod pod-projected-secrets-548329ec-87a4-4287-8a3d-7b667cbcefd9 to disappear Apr 6 21:12:14.576: INFO: Pod pod-projected-secrets-548329ec-87a4-4287-8a3d-7b667cbcefd9 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 6 21:12:14.577: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1138" for this suite. 
•{"msg":"PASSED [sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":278,"completed":26,"skipped":459,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 6 21:12:14.600: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the rc STEP: delete the rc STEP: wait for all pods to be garbage collected STEP: Gathering metrics W0406 21:12:24.678760 6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Apr 6 21:12:24.678: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 6 21:12:24.678: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-1741" for this suite. 
• [SLOW TEST:10.086 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance]","total":278,"completed":27,"skipped":478,"failed":0} SSSSSSSSSSS ------------------------------ [sig-apps] Job should delete a job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 6 21:12:24.687: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename job STEP: Waiting for a default service account to be provisioned in namespace [It] should delete a job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a job STEP: Ensuring active pods == parallelism STEP: delete a job STEP: deleting Job.batch foo in namespace job-1574, will wait for the garbage collector to delete the pods Apr 6 21:12:28.857: INFO: Deleting Job.batch foo took: 4.880607ms Apr 6 21:12:28.958: INFO: Terminating Job.batch foo pods took: 100.246789ms STEP: Ensuring job was deleted [AfterEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 6 21:13:09.462: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "job-1574" for this suite. 
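Reference sketch for the Job deletion test above: with the default (background) cascade, deleting a Job lets the garbage collector remove its pods, which is exactly what the test waits ~40s to observe. demo-job is an illustrative name:
kubectl create job demo-job --image=busybox -- sleep 300
kubectl delete job demo-job              # default cascade: pods are garbage collected
kubectl get pods -l job-name=demo-job    # eventually returns no resources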
• [SLOW TEST:44.783 seconds] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should delete a job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Job should delete a job [Conformance]","total":278,"completed":28,"skipped":489,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 6 21:13:09.471: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] pod should support shared volumes between containers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating Pod STEP: Waiting for the pod running STEP: Geting the pod STEP: Reading file content from the nginx-container Apr 6 21:13:13.578: INFO: ExecWithOptions {Command:[/bin/sh -c cat /usr/share/volumeshare/shareddata.txt] Namespace:emptydir-8274 PodName:pod-sharedvolume-5cb996d7-2940-4c48-ab71-966e4badda3e ContainerName:busybox-main-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 6 21:13:13.578: INFO: >>> kubeConfig: /root/.kube/config I0406 21:13:13.613658 6 log.go:172] (0xc0029b8b00) (0xc0025314a0) Create stream I0406 21:13:13.613695 6 log.go:172] (0xc0029b8b00) (0xc0025314a0) Stream added, broadcasting: 1 I0406 21:13:13.615547 6 log.go:172] (0xc0029b8b00) Reply frame received for 1 I0406 21:13:13.615597 6 log.go:172] (0xc0029b8b00) (0xc00288e460) Create stream I0406 21:13:13.615609 6 log.go:172] (0xc0029b8b00) (0xc00288e460) Stream added, broadcasting: 3 I0406 21:13:13.616404 6 log.go:172] (0xc0029b8b00) Reply frame received for 3 I0406 21:13:13.616439 6 log.go:172] (0xc0029b8b00) (0xc002730a00) Create stream I0406 21:13:13.616450 6 log.go:172] (0xc0029b8b00) (0xc002730a00) Stream added, broadcasting: 5 I0406 21:13:13.617385 6 log.go:172] (0xc0029b8b00) Reply frame received for 5 I0406 21:13:13.670826 6 log.go:172] (0xc0029b8b00) Data frame received for 5 I0406 21:13:13.670863 6 log.go:172] (0xc002730a00) (5) Data frame handling I0406 21:13:13.670920 6 log.go:172] (0xc0029b8b00) Data frame received for 3 I0406 21:13:13.670960 6 log.go:172] (0xc00288e460) (3) Data frame handling I0406 21:13:13.670985 6 log.go:172] (0xc00288e460) (3) Data frame sent I0406 21:13:13.671000 6 log.go:172] (0xc0029b8b00) Data frame received for 3 I0406 21:13:13.671013 6 log.go:172] (0xc00288e460) (3) Data frame handling I0406 21:13:13.672665 6 log.go:172] (0xc0029b8b00) Data frame received for 1 I0406 21:13:13.672714 6 log.go:172] (0xc0025314a0) (1) Data frame handling I0406 21:13:13.672745 6 log.go:172] (0xc0025314a0) (1) Data frame sent I0406 21:13:13.672769 6 log.go:172] (0xc0029b8b00) (0xc0025314a0) Stream removed, broadcasting: 1 I0406 21:13:13.672904 6 log.go:172] (0xc0029b8b00) Go away received I0406 21:13:13.673384 6 log.go:172] (0xc0029b8b00) (0xc0025314a0) Stream removed, broadcasting: 1 I0406 21:13:13.673410 6 log.go:172] (0xc0029b8b00) (0xc00288e460) 
Stream removed, broadcasting: 3 I0406 21:13:13.673422 6 log.go:172] (0xc0029b8b00) (0xc002730a00) Stream removed, broadcasting: 5 Apr 6 21:13:13.673: INFO: Exec stderr: "" [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 6 21:13:13.673: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-8274" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance]","total":278,"completed":29,"skipped":526,"failed":0} S ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 6 21:13:13.682: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating projection with secret that has name projected-secret-test-6e7d8d0b-b869-4959-8a70-8fa08e20bcdb STEP: Creating a pod to test consume secrets Apr 6 21:13:13.759: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-7f0ba24c-6297-452a-86aa-46427eaad1a7" in namespace "projected-5959" to be "success or failure" Apr 6 21:13:13.769: INFO: Pod "pod-projected-secrets-7f0ba24c-6297-452a-86aa-46427eaad1a7": Phase="Pending", Reason="", readiness=false. Elapsed: 10.435838ms Apr 6 21:13:15.773: INFO: Pod "pod-projected-secrets-7f0ba24c-6297-452a-86aa-46427eaad1a7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013911361s Apr 6 21:13:17.777: INFO: Pod "pod-projected-secrets-7f0ba24c-6297-452a-86aa-46427eaad1a7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.018285072s STEP: Saw pod success Apr 6 21:13:17.777: INFO: Pod "pod-projected-secrets-7f0ba24c-6297-452a-86aa-46427eaad1a7" satisfied condition "success or failure" Apr 6 21:13:17.780: INFO: Trying to get logs from node jerma-worker pod pod-projected-secrets-7f0ba24c-6297-452a-86aa-46427eaad1a7 container projected-secret-volume-test: STEP: delete the pod Apr 6 21:13:17.834: INFO: Waiting for pod pod-projected-secrets-7f0ba24c-6297-452a-86aa-46427eaad1a7 to disappear Apr 6 21:13:17.837: INFO: Pod pod-projected-secrets-7f0ba24c-6297-452a-86aa-46427eaad1a7 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 6 21:13:17.837: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5959" for this suite. 
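Reference sketch for the projected-secret defaultMode test above: defaultMode on a projected volume controls the file permissions of the projected keys. Assumes a Secret named demo-secret exists (as in the earlier sketch); pod name is illustrative.
kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: projected-mode-demo
spec:
  restartPolicy: Never
  containers:
  - name: check
    image: busybox
    command: ["ls", "-l", "/etc/projected"]
    volumeMounts:
    - name: secret-vol
      mountPath: /etc/projected
  volumes:
  - name: secret-vol
    projected:
      defaultMode: 0400      # files appear as -r-------- instead of the 0644 default
      sources:
      - secret:
          name: demo-secret
EOF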
•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":30,"skipped":527,"failed":0} ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 6 21:13:17.848: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name projected-configmap-test-volume-1f131698-db2d-4c7f-8781-379f80ee09a6 STEP: Creating a pod to test consume configMaps Apr 6 21:13:17.929: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-6cba8772-4b65-47f5-af21-1709a131e8e7" in namespace "projected-607" to be "success or failure" Apr 6 21:13:17.954: INFO: Pod "pod-projected-configmaps-6cba8772-4b65-47f5-af21-1709a131e8e7": Phase="Pending", Reason="", readiness=false. Elapsed: 24.675746ms Apr 6 21:13:19.978: INFO: Pod "pod-projected-configmaps-6cba8772-4b65-47f5-af21-1709a131e8e7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.048688656s Apr 6 21:13:21.982: INFO: Pod "pod-projected-configmaps-6cba8772-4b65-47f5-af21-1709a131e8e7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.052657067s STEP: Saw pod success Apr 6 21:13:21.982: INFO: Pod "pod-projected-configmaps-6cba8772-4b65-47f5-af21-1709a131e8e7" satisfied condition "success or failure" Apr 6 21:13:21.984: INFO: Trying to get logs from node jerma-worker pod pod-projected-configmaps-6cba8772-4b65-47f5-af21-1709a131e8e7 container projected-configmap-volume-test: STEP: delete the pod Apr 6 21:13:22.010: INFO: Waiting for pod pod-projected-configmaps-6cba8772-4b65-47f5-af21-1709a131e8e7 to disappear Apr 6 21:13:22.038: INFO: Pod pod-projected-configmaps-6cba8772-4b65-47f5-af21-1709a131e8e7 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 6 21:13:22.038: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-607" for this suite. 
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]","total":278,"completed":31,"skipped":527,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD without validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 6 21:13:22.047: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD without validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Apr 6 21:13:22.085: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with any unknown properties Apr 6 21:13:25.046: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-7725 create -f -' Apr 6 21:13:28.029: INFO: stderr: "" Apr 6 21:13:28.029: INFO: stdout: "e2e-test-crd-publish-openapi-8871-crd.crd-publish-openapi-test-empty.example.com/test-cr created\n" Apr 6 21:13:28.029: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-7725 delete e2e-test-crd-publish-openapi-8871-crds test-cr' Apr 6 21:13:28.126: INFO: stderr: "" Apr 6 21:13:28.126: INFO: stdout: "e2e-test-crd-publish-openapi-8871-crd.crd-publish-openapi-test-empty.example.com \"test-cr\" deleted\n" Apr 6 21:13:28.126: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-7725 apply -f -' Apr 6 21:13:28.357: INFO: stderr: "" Apr 6 21:13:28.358: INFO: stdout: "e2e-test-crd-publish-openapi-8871-crd.crd-publish-openapi-test-empty.example.com/test-cr created\n" Apr 6 21:13:28.358: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-7725 delete e2e-test-crd-publish-openapi-8871-crds test-cr' Apr 6 21:13:28.457: INFO: stderr: "" Apr 6 21:13:28.457: INFO: stdout: "e2e-test-crd-publish-openapi-8871-crd.crd-publish-openapi-test-empty.example.com \"test-cr\" deleted\n" STEP: kubectl explain works to explain CR without validation schema Apr 6 21:13:28.457: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-8871-crds' Apr 6 21:13:28.694: INFO: stderr: "" Apr 6 21:13:28.694: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-8871-crd\nVERSION: crd-publish-openapi-test-empty.example.com/v1\n\nDESCRIPTION:\n \n" [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 6 21:13:31.629: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-7725" for this suite. 
• [SLOW TEST:9.595 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD without validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD without validation schema [Conformance]","total":278,"completed":32,"skipped":540,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 6 21:13:31.643: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Apr 6 21:13:31.730: INFO: Waiting up to 5m0s for pod "downwardapi-volume-9d4a6bfc-2007-4af3-8551-7608241ae63d" in namespace "projected-1177" to be "success or failure" Apr 6 21:13:31.745: INFO: Pod "downwardapi-volume-9d4a6bfc-2007-4af3-8551-7608241ae63d": Phase="Pending", Reason="", readiness=false. Elapsed: 15.558351ms Apr 6 21:13:33.749: INFO: Pod "downwardapi-volume-9d4a6bfc-2007-4af3-8551-7608241ae63d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019636084s Apr 6 21:13:35.754: INFO: Pod "downwardapi-volume-9d4a6bfc-2007-4af3-8551-7608241ae63d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.024174744s STEP: Saw pod success Apr 6 21:13:35.754: INFO: Pod "downwardapi-volume-9d4a6bfc-2007-4af3-8551-7608241ae63d" satisfied condition "success or failure" Apr 6 21:13:35.757: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-9d4a6bfc-2007-4af3-8551-7608241ae63d container client-container: STEP: delete the pod Apr 6 21:13:35.783: INFO: Waiting for pod downwardapi-volume-9d4a6bfc-2007-4af3-8551-7608241ae63d to disappear Apr 6 21:13:35.787: INFO: Pod downwardapi-volume-9d4a6bfc-2007-4af3-8551-7608241ae63d no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 6 21:13:35.787: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1177" for this suite. 
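Reference sketch for the downward API cpu-limit test above: a container's limits.cpu exposed as a file via resourceFieldRef. Setting divisor explicitly makes the reported value predictable (without it, cpu quantities are rounded up to whole cores). All names are illustrative.
kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: downward-cpu-demo
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    command: ["cat", "/etc/podinfo/cpu_limit"]
    resources:
      limits:
        cpu: 500m
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: cpu_limit
        resourceFieldRef:
          containerName: client-container
          resource: limits.cpu
          divisor: 1m        # report in millicores: the file contains "500"
EOF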
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance]","total":278,"completed":33,"skipped":562,"failed":0} SSSSSSS ------------------------------ [sig-network] Services should provide secure master service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 6 21:13:35.793: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139 [It] should provide secure master service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 6 21:13:35.871: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-9409" for this suite. [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143 •{"msg":"PASSED [sig-network] Services should provide secure master service [Conformance]","total":278,"completed":34,"skipped":569,"failed":0} SSSS ------------------------------ [sig-cli] Kubectl client Proxy server should support proxy with --port 0 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 6 21:13:35.879: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [It] should support proxy with --port 0 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: starting the proxy server Apr 6 21:13:35.931: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy -p 0 --disable-filter' STEP: curling proxy /api/ output [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 6 21:13:36.015: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-9519" for this suite. 
•{"msg":"PASSED [sig-cli] Kubectl client Proxy server should support proxy with --port 0 [Conformance]","total":278,"completed":35,"skipped":573,"failed":0} S ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 6 21:13:36.023: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:64 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:79 STEP: Creating service test in namespace statefulset-676 [It] should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a new StatefulSet Apr 6 21:13:36.125: INFO: Found 0 stateful pods, waiting for 3 Apr 6 21:13:46.130: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Apr 6 21:13:46.130: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Apr 6 21:13:46.130: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false Apr 6 21:13:56.130: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Apr 6 21:13:56.130: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Apr 6 21:13:56.130: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true Apr 6 21:13:56.140: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-676 ss2-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Apr 6 21:13:56.446: INFO: stderr: "I0406 21:13:56.301289 378 log.go:172] (0xc000105600) (0xc00067dae0) Create stream\nI0406 21:13:56.301359 378 log.go:172] (0xc000105600) (0xc00067dae0) Stream added, broadcasting: 1\nI0406 21:13:56.304238 378 log.go:172] (0xc000105600) Reply frame received for 1\nI0406 21:13:56.304282 378 log.go:172] (0xc000105600) (0xc000a3a000) Create stream\nI0406 21:13:56.304297 378 log.go:172] (0xc000105600) (0xc000a3a000) Stream added, broadcasting: 3\nI0406 21:13:56.305644 378 log.go:172] (0xc000105600) Reply frame received for 3\nI0406 21:13:56.305693 378 log.go:172] (0xc000105600) (0xc000024000) Create stream\nI0406 21:13:56.305710 378 log.go:172] (0xc000105600) (0xc000024000) Stream added, broadcasting: 5\nI0406 21:13:56.306741 378 log.go:172] (0xc000105600) Reply frame received for 5\nI0406 21:13:56.405810 378 log.go:172] (0xc000105600) Data frame received for 5\nI0406 21:13:56.405841 378 log.go:172] (0xc000024000) (5) Data frame handling\nI0406 21:13:56.405863 378 log.go:172] (0xc000024000) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0406 21:13:56.440194 378 log.go:172] (0xc000105600) Data frame received for 3\nI0406 21:13:56.440275 378 log.go:172] 
(0xc000a3a000) (3) Data frame handling\nI0406 21:13:56.440298 378 log.go:172] (0xc000a3a000) (3) Data frame sent\nI0406 21:13:56.440316 378 log.go:172] (0xc000105600) Data frame received for 3\nI0406 21:13:56.440333 378 log.go:172] (0xc000a3a000) (3) Data frame handling\nI0406 21:13:56.440402 378 log.go:172] (0xc000105600) Data frame received for 5\nI0406 21:13:56.440438 378 log.go:172] (0xc000024000) (5) Data frame handling\nI0406 21:13:56.442843 378 log.go:172] (0xc000105600) Data frame received for 1\nI0406 21:13:56.442864 378 log.go:172] (0xc00067dae0) (1) Data frame handling\nI0406 21:13:56.442885 378 log.go:172] (0xc00067dae0) (1) Data frame sent\nI0406 21:13:56.442922 378 log.go:172] (0xc000105600) (0xc00067dae0) Stream removed, broadcasting: 1\nI0406 21:13:56.442948 378 log.go:172] (0xc000105600) Go away received\nI0406 21:13:56.443306 378 log.go:172] (0xc000105600) (0xc00067dae0) Stream removed, broadcasting: 1\nI0406 21:13:56.443319 378 log.go:172] (0xc000105600) (0xc000a3a000) Stream removed, broadcasting: 3\nI0406 21:13:56.443325 378 log.go:172] (0xc000105600) (0xc000024000) Stream removed, broadcasting: 5\n" Apr 6 21:13:56.446: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Apr 6 21:13:56.446: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss2-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' STEP: Updating StatefulSet template: update image from docker.io/library/httpd:2.4.38-alpine to docker.io/library/httpd:2.4.39-alpine Apr 6 21:14:06.500: INFO: Updating stateful set ss2 STEP: Creating a new revision STEP: Updating Pods in reverse ordinal order Apr 6 21:14:16.524: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-676 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Apr 6 21:14:16.747: INFO: stderr: "I0406 21:14:16.656171 401 log.go:172] (0xc000a531e0) (0xc0009e2820) Create stream\nI0406 21:14:16.656288 401 log.go:172] (0xc000a531e0) (0xc0009e2820) Stream added, broadcasting: 1\nI0406 21:14:16.660952 401 log.go:172] (0xc000a531e0) Reply frame received for 1\nI0406 21:14:16.660999 401 log.go:172] (0xc000a531e0) (0xc0005a0640) Create stream\nI0406 21:14:16.661017 401 log.go:172] (0xc000a531e0) (0xc0005a0640) Stream added, broadcasting: 3\nI0406 21:14:16.662335 401 log.go:172] (0xc000a531e0) Reply frame received for 3\nI0406 21:14:16.662389 401 log.go:172] (0xc000a531e0) (0xc000759400) Create stream\nI0406 21:14:16.662412 401 log.go:172] (0xc000a531e0) (0xc000759400) Stream added, broadcasting: 5\nI0406 21:14:16.663617 401 log.go:172] (0xc000a531e0) Reply frame received for 5\nI0406 21:14:16.742841 401 log.go:172] (0xc000a531e0) Data frame received for 3\nI0406 21:14:16.742877 401 log.go:172] (0xc0005a0640) (3) Data frame handling\nI0406 21:14:16.742894 401 log.go:172] (0xc0005a0640) (3) Data frame sent\nI0406 21:14:16.742904 401 log.go:172] (0xc000a531e0) Data frame received for 3\nI0406 21:14:16.742910 401 log.go:172] (0xc0005a0640) (3) Data frame handling\nI0406 21:14:16.742936 401 log.go:172] (0xc000a531e0) Data frame received for 5\nI0406 21:14:16.742943 401 log.go:172] (0xc000759400) (5) Data frame handling\nI0406 21:14:16.742954 401 log.go:172] (0xc000759400) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0406 21:14:16.742969 401 log.go:172] (0xc000a531e0) Data frame received for 5\nI0406 21:14:16.743047 401 log.go:172] (0xc000759400) (5) Data frame handling\nI0406 21:14:16.743862 401 
log.go:172] (0xc000a531e0) Data frame received for 1\nI0406 21:14:16.743876 401 log.go:172] (0xc0009e2820) (1) Data frame handling\nI0406 21:14:16.743883 401 log.go:172] (0xc0009e2820) (1) Data frame sent\nI0406 21:14:16.743893 401 log.go:172] (0xc000a531e0) (0xc0009e2820) Stream removed, broadcasting: 1\nI0406 21:14:16.743903 401 log.go:172] (0xc000a531e0) Go away received\nI0406 21:14:16.744193 401 log.go:172] (0xc000a531e0) (0xc0009e2820) Stream removed, broadcasting: 1\nI0406 21:14:16.744208 401 log.go:172] (0xc000a531e0) (0xc0005a0640) Stream removed, broadcasting: 3\nI0406 21:14:16.744214 401 log.go:172] (0xc000a531e0) (0xc000759400) Stream removed, broadcasting: 5\n" Apr 6 21:14:16.747: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Apr 6 21:14:16.747: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss2-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' STEP: Rolling back to a previous revision Apr 6 21:14:36.766: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-676 ss2-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Apr 6 21:14:37.010: INFO: stderr: "I0406 21:14:36.895858 424 log.go:172] (0xc000104f20) (0xc000950000) Create stream\nI0406 21:14:36.895903 424 log.go:172] (0xc000104f20) (0xc000950000) Stream added, broadcasting: 1\nI0406 21:14:36.898434 424 log.go:172] (0xc000104f20) Reply frame received for 1\nI0406 21:14:36.898488 424 log.go:172] (0xc000104f20) (0xc0006eba40) Create stream\nI0406 21:14:36.898505 424 log.go:172] (0xc000104f20) (0xc0006eba40) Stream added, broadcasting: 3\nI0406 21:14:36.899505 424 log.go:172] (0xc000104f20) Reply frame received for 3\nI0406 21:14:36.899532 424 log.go:172] (0xc000104f20) (0xc0001ba000) Create stream\nI0406 21:14:36.899540 424 log.go:172] (0xc000104f20) (0xc0001ba000) Stream added, broadcasting: 5\nI0406 21:14:36.900519 424 log.go:172] (0xc000104f20) Reply frame received for 5\nI0406 21:14:36.977658 424 log.go:172] (0xc000104f20) Data frame received for 5\nI0406 21:14:36.977714 424 log.go:172] (0xc0001ba000) (5) Data frame handling\nI0406 21:14:36.977739 424 log.go:172] (0xc0001ba000) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0406 21:14:37.004519 424 log.go:172] (0xc000104f20) Data frame received for 3\nI0406 21:14:37.004558 424 log.go:172] (0xc0006eba40) (3) Data frame handling\nI0406 21:14:37.004583 424 log.go:172] (0xc0006eba40) (3) Data frame sent\nI0406 21:14:37.004735 424 log.go:172] (0xc000104f20) Data frame received for 5\nI0406 21:14:37.004755 424 log.go:172] (0xc0001ba000) (5) Data frame handling\nI0406 21:14:37.004793 424 log.go:172] (0xc000104f20) Data frame received for 3\nI0406 21:14:37.004806 424 log.go:172] (0xc0006eba40) (3) Data frame handling\nI0406 21:14:37.006593 424 log.go:172] (0xc000104f20) Data frame received for 1\nI0406 21:14:37.006625 424 log.go:172] (0xc000950000) (1) Data frame handling\nI0406 21:14:37.006651 424 log.go:172] (0xc000950000) (1) Data frame sent\nI0406 21:14:37.006671 424 log.go:172] (0xc000104f20) (0xc000950000) Stream removed, broadcasting: 1\nI0406 21:14:37.006689 424 log.go:172] (0xc000104f20) Go away received\nI0406 21:14:37.007162 424 log.go:172] (0xc000104f20) (0xc000950000) Stream removed, broadcasting: 1\nI0406 21:14:37.007186 424 log.go:172] (0xc000104f20) (0xc0006eba40) Stream removed, broadcasting: 3\nI0406 21:14:37.007197 424 log.go:172] (0xc000104f20) (0xc0001ba000) Stream removed, broadcasting: 5\n" 
Apr 6 21:14:37.010: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Apr 6 21:14:37.010: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss2-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Apr 6 21:14:47.044: INFO: Updating stateful set ss2 STEP: Rolling back update in reverse ordinal order Apr 6 21:14:57.107: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-676 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Apr 6 21:14:57.340: INFO: stderr: "I0406 21:14:57.232986 444 log.go:172] (0xc0009706e0) (0xc000abe000) Create stream\nI0406 21:14:57.233049 444 log.go:172] (0xc0009706e0) (0xc000abe000) Stream added, broadcasting: 1\nI0406 21:14:57.235598 444 log.go:172] (0xc0009706e0) Reply frame received for 1\nI0406 21:14:57.235629 444 log.go:172] (0xc0009706e0) (0xc0006a9cc0) Create stream\nI0406 21:14:57.235638 444 log.go:172] (0xc0009706e0) (0xc0006a9cc0) Stream added, broadcasting: 3\nI0406 21:14:57.236580 444 log.go:172] (0xc0009706e0) Reply frame received for 3\nI0406 21:14:57.236631 444 log.go:172] (0xc0009706e0) (0xc000abe0a0) Create stream\nI0406 21:14:57.236646 444 log.go:172] (0xc0009706e0) (0xc000abe0a0) Stream added, broadcasting: 5\nI0406 21:14:57.237757 444 log.go:172] (0xc0009706e0) Reply frame received for 5\nI0406 21:14:57.327405 444 log.go:172] (0xc0009706e0) Data frame received for 3\nI0406 21:14:57.327462 444 log.go:172] (0xc0006a9cc0) (3) Data frame handling\nI0406 21:14:57.327482 444 log.go:172] (0xc0006a9cc0) (3) Data frame sent\nI0406 21:14:57.327499 444 log.go:172] (0xc0009706e0) Data frame received for 3\nI0406 21:14:57.327512 444 log.go:172] (0xc0006a9cc0) (3) Data frame handling\nI0406 21:14:57.327554 444 log.go:172] (0xc0009706e0) Data frame received for 5\nI0406 21:14:57.327582 444 log.go:172] (0xc000abe0a0) (5) Data frame handling\nI0406 21:14:57.327607 444 log.go:172] (0xc000abe0a0) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0406 21:14:57.327630 444 log.go:172] (0xc0009706e0) Data frame received for 5\nI0406 21:14:57.327723 444 log.go:172] (0xc000abe0a0) (5) Data frame handling\nI0406 21:14:57.329080 444 log.go:172] (0xc0009706e0) Data frame received for 1\nI0406 21:14:57.329104 444 log.go:172] (0xc000abe000) (1) Data frame handling\nI0406 21:14:57.329223 444 log.go:172] (0xc000abe000) (1) Data frame sent\nI0406 21:14:57.329244 444 log.go:172] (0xc0009706e0) (0xc000abe000) Stream removed, broadcasting: 1\nI0406 21:14:57.329405 444 log.go:172] (0xc0009706e0) Go away received\nI0406 21:14:57.329588 444 log.go:172] (0xc0009706e0) (0xc000abe000) Stream removed, broadcasting: 1\nI0406 21:14:57.329615 444 log.go:172] (0xc0009706e0) (0xc0006a9cc0) Stream removed, broadcasting: 3\nI0406 21:14:57.329631 444 log.go:172] (0xc0009706e0) (0xc000abe0a0) Stream removed, broadcasting: 5\n" Apr 6 21:14:57.340: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Apr 6 21:14:57.340: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss2-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Apr 6 21:15:07.359: INFO: Waiting for StatefulSet statefulset-676/ss2 to complete update Apr 6 21:15:07.359: INFO: Waiting for Pod statefulset-676/ss2-0 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57 Apr 6 21:15:07.359: INFO: Waiting for Pod statefulset-676/ss2-1 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57 Apr 6 
21:15:07.359: INFO: Waiting for Pod statefulset-676/ss2-2 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57 Apr 6 21:15:17.410: INFO: Waiting for StatefulSet statefulset-676/ss2 to complete update Apr 6 21:15:17.410: INFO: Waiting for Pod statefulset-676/ss2-0 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57 [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90 Apr 6 21:15:27.366: INFO: Deleting all statefulset in ns statefulset-676 Apr 6 21:15:27.368: INFO: Scaling statefulset ss2 to 0 Apr 6 21:15:47.387: INFO: Waiting for statefulset status.replicas updated to 0 Apr 6 21:15:47.390: INFO: Deleting statefulset ss2 [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 6 21:15:47.408: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-676" for this suite. • [SLOW TEST:131.392 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance]","total":278,"completed":36,"skipped":574,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Secrets should fail to create secret due to empty secret key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 6 21:15:47.416: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should fail to create secret due to empty secret key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating projection with secret that has name secret-emptykey-test-ac901f10-a47e-4f94-b05d-84a444194d16 [AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 6 21:15:47.458: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-1286" for this suite. 
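Reference sketch for the empty-secret-key test above: server-side validation rejects a Secret whose data map contains an empty key, so creation never succeeds. A minimal reproduction (dmFsdWU= is base64 for "value"):
kubectl create -f - <<'EOF'
apiVersion: v1
kind: Secret
metadata:
  name: empty-key-secret
data:
  "": dmFsdWU=        # empty key: rejected by the API server
EOF
The request fails with an Invalid error stating, roughly, that a valid config key must consist of alphanumeric characters, '-', '_' or '.'.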
•{"msg":"PASSED [sig-api-machinery] Secrets should fail to create secret due to empty secret key [Conformance]","total":278,"completed":37,"skipped":602,"failed":0} SSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl cluster-info should check if Kubernetes master services is included in cluster-info [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 6 21:15:47.472: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [It] should check if Kubernetes master services is included in cluster-info [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: validating cluster-info Apr 6 21:15:47.533: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config cluster-info' Apr 6 21:15:47.640: INFO: stderr: "" Apr 6 21:15:47.640: INFO: stdout: "\x1b[0;32mKubernetes master\x1b[0m is running at \x1b[0;33mhttps://172.30.12.66:32770\x1b[0m\n\x1b[0;32mKubeDNS\x1b[0m is running at \x1b[0;33mhttps://172.30.12.66:32770/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy\x1b[0m\n\nTo further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 6 21:15:47.640: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-9128" for this suite. 
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl cluster-info should check if Kubernetes master services is included in cluster-info [Conformance]","total":278,"completed":38,"skipped":608,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 6 21:15:47.649: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:64 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:79 STEP: Creating service test in namespace statefulset-1513 [It] Should recreate evicted statefulset [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Looking for a node to schedule stateful set and pod STEP: Creating pod with conflicting port in namespace statefulset-1513 STEP: Creating statefulset with conflicting port in namespace statefulset-1513 STEP: Waiting until pod test-pod will start running in namespace statefulset-1513 STEP: Waiting until stateful pod ss-0 will be recreated and deleted at least once in namespace statefulset-1513 Apr 6 21:15:51.785: INFO: Observed stateful pod in namespace: statefulset-1513, name: ss-0, uid: fa220cd1-00a1-4694-81aa-8265136f523c, status phase: Pending. Waiting for statefulset controller to delete. Apr 6 21:15:51.967: INFO: Observed stateful pod in namespace: statefulset-1513, name: ss-0, uid: fa220cd1-00a1-4694-81aa-8265136f523c, status phase: Failed. Waiting for statefulset controller to delete. Apr 6 21:15:52.030: INFO: Observed stateful pod in namespace: statefulset-1513, name: ss-0, uid: fa220cd1-00a1-4694-81aa-8265136f523c, status phase: Failed. Waiting for statefulset controller to delete. Apr 6 21:15:52.042: INFO: Observed delete event for stateful pod ss-0 in namespace statefulset-1513 STEP: Removing pod with conflicting port in namespace statefulset-1513 STEP: Waiting when stateful pod ss-0 will be recreated in namespace statefulset-1513 and will be in running state [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90 Apr 6 21:15:56.107: INFO: Deleting all statefulset in ns statefulset-1513 Apr 6 21:15:56.110: INFO: Scaling statefulset ss to 0 Apr 6 21:16:06.155: INFO: Waiting for statefulset status.replicas updated to 0 Apr 6 21:16:06.158: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 6 21:16:06.170: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-1513" for this suite. 
• [SLOW TEST:18.528 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 Should recreate evicted statefulset [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]","total":278,"completed":39,"skipped":622,"failed":0} SSS ------------------------------ [sig-network] DNS should provide DNS for ExternalName services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 6 21:16:06.177: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for ExternalName services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a test externalName service STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-3070.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-3070.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-3070.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-3070.svc.cluster.local; sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Apr 6 21:16:12.321: INFO: DNS probes using dns-test-80bbaf3f-31ee-431a-a134-de07937ed69c succeeded STEP: deleting the pod STEP: changing the externalName to bar.example.com STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-3070.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-3070.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-3070.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-3070.svc.cluster.local; sleep 1; done STEP: creating a second pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Apr 6 21:16:18.425: INFO: File wheezy_udp@dns-test-service-3.dns-3070.svc.cluster.local from pod dns-3070/dns-test-ef87db3e-210b-4d3a-bae3-c96427ef1117 contains 'foo.example.com. ' instead of 'bar.example.com.' Apr 6 21:16:18.428: INFO: File jessie_udp@dns-test-service-3.dns-3070.svc.cluster.local from pod dns-3070/dns-test-ef87db3e-210b-4d3a-bae3-c96427ef1117 contains 'foo.example.com. ' instead of 'bar.example.com.' 
Apr 6 21:16:18.428: INFO: Lookups using dns-3070/dns-test-ef87db3e-210b-4d3a-bae3-c96427ef1117 failed for: [wheezy_udp@dns-test-service-3.dns-3070.svc.cluster.local jessie_udp@dns-test-service-3.dns-3070.svc.cluster.local] Apr 6 21:16:23.433: INFO: File wheezy_udp@dns-test-service-3.dns-3070.svc.cluster.local from pod dns-3070/dns-test-ef87db3e-210b-4d3a-bae3-c96427ef1117 contains 'foo.example.com. ' instead of 'bar.example.com.' Apr 6 21:16:23.436: INFO: File jessie_udp@dns-test-service-3.dns-3070.svc.cluster.local from pod dns-3070/dns-test-ef87db3e-210b-4d3a-bae3-c96427ef1117 contains 'foo.example.com. ' instead of 'bar.example.com.' Apr 6 21:16:23.437: INFO: Lookups using dns-3070/dns-test-ef87db3e-210b-4d3a-bae3-c96427ef1117 failed for: [wheezy_udp@dns-test-service-3.dns-3070.svc.cluster.local jessie_udp@dns-test-service-3.dns-3070.svc.cluster.local] Apr 6 21:16:28.433: INFO: File wheezy_udp@dns-test-service-3.dns-3070.svc.cluster.local from pod dns-3070/dns-test-ef87db3e-210b-4d3a-bae3-c96427ef1117 contains 'foo.example.com. ' instead of 'bar.example.com.' Apr 6 21:16:28.437: INFO: File jessie_udp@dns-test-service-3.dns-3070.svc.cluster.local from pod dns-3070/dns-test-ef87db3e-210b-4d3a-bae3-c96427ef1117 contains 'foo.example.com. ' instead of 'bar.example.com.' Apr 6 21:16:28.437: INFO: Lookups using dns-3070/dns-test-ef87db3e-210b-4d3a-bae3-c96427ef1117 failed for: [wheezy_udp@dns-test-service-3.dns-3070.svc.cluster.local jessie_udp@dns-test-service-3.dns-3070.svc.cluster.local] Apr 6 21:16:33.433: INFO: File wheezy_udp@dns-test-service-3.dns-3070.svc.cluster.local from pod dns-3070/dns-test-ef87db3e-210b-4d3a-bae3-c96427ef1117 contains 'foo.example.com. ' instead of 'bar.example.com.' Apr 6 21:16:33.436: INFO: File jessie_udp@dns-test-service-3.dns-3070.svc.cluster.local from pod dns-3070/dns-test-ef87db3e-210b-4d3a-bae3-c96427ef1117 contains 'foo.example.com. ' instead of 'bar.example.com.' Apr 6 21:16:33.436: INFO: Lookups using dns-3070/dns-test-ef87db3e-210b-4d3a-bae3-c96427ef1117 failed for: [wheezy_udp@dns-test-service-3.dns-3070.svc.cluster.local jessie_udp@dns-test-service-3.dns-3070.svc.cluster.local] Apr 6 21:16:38.434: INFO: File wheezy_udp@dns-test-service-3.dns-3070.svc.cluster.local from pod dns-3070/dns-test-ef87db3e-210b-4d3a-bae3-c96427ef1117 contains 'foo.example.com. ' instead of 'bar.example.com.' Apr 6 21:16:38.438: INFO: File jessie_udp@dns-test-service-3.dns-3070.svc.cluster.local from pod dns-3070/dns-test-ef87db3e-210b-4d3a-bae3-c96427ef1117 contains 'foo.example.com. ' instead of 'bar.example.com.' 
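Annotation: the repeated "contains 'foo.example.com.' instead of 'bar.example.com.'" lines above and the failure summaries around them are the probe loop waiting for the ExternalName change to propagate; the CNAME answer only flips once the cluster DNS serves the updated Service, so the framework re-polls every few seconds until both probe pods agree. The update itself corresponds roughly to a patch like this (a sketch; the framework edits the Service through the API rather than via kubectl):

kubectl patch service dns-test-service-3 -n dns-3070 --type=merge -p '{"spec":{"externalName":"bar.example.com"}}'
# each probe pod then effectively re-runs:
dig +short dns-test-service-3.dns-3070.svc.cluster.local CNAME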
Apr 6 21:16:38.438: INFO: Lookups using dns-3070/dns-test-ef87db3e-210b-4d3a-bae3-c96427ef1117 failed for: [wheezy_udp@dns-test-service-3.dns-3070.svc.cluster.local jessie_udp@dns-test-service-3.dns-3070.svc.cluster.local] Apr 6 21:16:43.437: INFO: DNS probes using dns-test-ef87db3e-210b-4d3a-bae3-c96427ef1117 succeeded STEP: deleting the pod STEP: changing the service to type=ClusterIP STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-3070.svc.cluster.local A > /results/wheezy_udp@dns-test-service-3.dns-3070.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-3070.svc.cluster.local A > /results/jessie_udp@dns-test-service-3.dns-3070.svc.cluster.local; sleep 1; done STEP: creating a third pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Apr 6 21:16:50.162: INFO: DNS probes using dns-test-61711494-c536-44e4-a2a3-8d5152ecda46 succeeded STEP: deleting the pod STEP: deleting the test externalName service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 6 21:16:50.271: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-3070" for this suite. • [SLOW TEST:44.106 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for ExternalName services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for ExternalName services [Conformance]","total":278,"completed":40,"skipped":625,"failed":0} SS ------------------------------ [sig-network] Proxy version v1 should proxy logs on node using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 6 21:16:50.284: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy logs on node using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Apr 6 21:16:50.577: INFO: (0) /api/v1/nodes/jerma-worker2/proxy/logs/:
containers/ pods/ (200; 15.48315ms)
Apr 6 21:16:50.580: INFO: (1) /api/v1/nodes/jerma-worker2/proxy/logs/: containers/ pods/ (200; 2.713152ms)
Apr 6 21:16:50.583: INFO: (2) /api/v1/nodes/jerma-worker2/proxy/logs/: containers/ pods/ (200; 2.89718ms)
Apr 6 21:16:50.586: INFO: (3) /api/v1/nodes/jerma-worker2/proxy/logs/: containers/ pods/ (200; 2.763436ms)
Apr 6 21:16:50.588: INFO: (4) /api/v1/nodes/jerma-worker2/proxy/logs/: containers/ pods/ (200; 2.70904ms)
Apr 6 21:16:50.591: INFO: (5) /api/v1/nodes/jerma-worker2/proxy/logs/: containers/ pods/ (200; 2.753277ms)
Apr 6 21:16:50.594: INFO: (6) /api/v1/nodes/jerma-worker2/proxy/logs/: containers/ pods/ (200; 2.71251ms)
Apr 6 21:16:50.596: INFO: (7) /api/v1/nodes/jerma-worker2/proxy/logs/: containers/ pods/ (200; 2.523898ms)
Apr 6 21:16:50.599: INFO: (8) /api/v1/nodes/jerma-worker2/proxy/logs/: containers/ pods/ (200; 2.725669ms)
Apr 6 21:16:50.602: INFO: (9) /api/v1/nodes/jerma-worker2/proxy/logs/: containers/ pods/ (200; 3.263198ms)
Apr 6 21:16:50.606: INFO: (10) /api/v1/nodes/jerma-worker2/proxy/logs/: containers/ pods/ (200; 3.328395ms)
Apr 6 21:16:50.609: INFO: (11) /api/v1/nodes/jerma-worker2/proxy/logs/: containers/ pods/ (200; 3.323664ms)
Apr 6 21:16:50.613: INFO: (12) /api/v1/nodes/jerma-worker2/proxy/logs/: containers/ pods/ (200; 3.384209ms)
Apr 6 21:16:50.616: INFO: (13) /api/v1/nodes/jerma-worker2/proxy/logs/: containers/ pods/ (200; 3.618348ms)
Apr 6 21:16:50.620: INFO: (14) /api/v1/nodes/jerma-worker2/proxy/logs/: containers/ pods/ (200; 3.538758ms)
Apr 6 21:16:50.623: INFO: (15) /api/v1/nodes/jerma-worker2/proxy/logs/: containers/ pods/ (200; 3.494738ms)
Apr 6 21:16:50.627: INFO: (16) /api/v1/nodes/jerma-worker2/proxy/logs/: containers/ pods/ (200; 3.732252ms)
Apr 6 21:16:50.631: INFO: (17) /api/v1/nodes/jerma-worker2/proxy/logs/: containers/ pods/ (200; 3.924298ms)
Apr 6 21:16:50.635: INFO: (18) /api/v1/nodes/jerma-worker2/proxy/logs/: containers/ pods/ (200; 4.09133ms)
Apr 6 21:16:50.638: INFO: (19) /api/v1/nodes/jerma-worker2/proxy/logs/: containers/ pods/
(200; 3.188789ms) [AfterEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 6 21:16:50.638: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "proxy-4988" for this suite. •{"msg":"PASSED [sig-network] Proxy version v1 should proxy logs on node using proxy subresource [Conformance]","total":278,"completed":41,"skipped":627,"failed":0} SSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 6 21:16:50.645: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64 STEP: create the container to handle the HTTPGet hook request. [It] should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the pod with lifecycle hook STEP: check poststart hook STEP: delete the pod with lifecycle hook Apr 6 21:16:58.766: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Apr 6 21:16:58.788: INFO: Pod pod-with-poststart-http-hook still exists Apr 6 21:17:00.788: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Apr 6 21:17:00.793: INFO: Pod pod-with-poststart-http-hook still exists Apr 6 21:17:02.788: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Apr 6 21:17:02.792: INFO: Pod pod-with-poststart-http-hook no longer exists [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 6 21:17:02.792: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-9098" for this suite. 
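Annotation: in the lifecycle test above, the kubelet issues the postStart HTTP GET against the handler pod as soon as the hooked container starts, and the "check poststart hook" step then asserts the handler recorded the request. A minimal sketch of such a pod (image, hook host IP, port and path are illustrative, not the suite's actual manifest):

kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: pod-with-poststart-http-hook
spec:
  containers:
  - name: main
    image: busybox
    command: ["sh", "-c", "sleep 3600"]
    lifecycle:
      postStart:
        httpGet:
          host: 10.244.1.10   # IP of the pod serving the hook (illustrative)
          port: 8080
          path: /echo?msg=poststart
EOF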
• [SLOW TEST:12.153 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance]","total":278,"completed":42,"skipped":633,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 6 21:17:02.799: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0666 on tmpfs Apr 6 21:17:02.873: INFO: Waiting up to 5m0s for pod "pod-7272992a-c5c4-4e90-940d-ad61bfd685a5" in namespace "emptydir-9208" to be "success or failure" Apr 6 21:17:02.877: INFO: Pod "pod-7272992a-c5c4-4e90-940d-ad61bfd685a5": Phase="Pending", Reason="", readiness=false. Elapsed: 4.009815ms Apr 6 21:17:04.881: INFO: Pod "pod-7272992a-c5c4-4e90-940d-ad61bfd685a5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008595466s Apr 6 21:17:06.885: INFO: Pod "pod-7272992a-c5c4-4e90-940d-ad61bfd685a5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.01268923s STEP: Saw pod success Apr 6 21:17:06.886: INFO: Pod "pod-7272992a-c5c4-4e90-940d-ad61bfd685a5" satisfied condition "success or failure" Apr 6 21:17:06.888: INFO: Trying to get logs from node jerma-worker pod pod-7272992a-c5c4-4e90-940d-ad61bfd685a5 container test-container: STEP: delete the pod Apr 6 21:17:06.932: INFO: Waiting for pod pod-7272992a-c5c4-4e90-940d-ad61bfd685a5 to disappear Apr 6 21:17:06.942: INFO: Pod pod-7272992a-c5c4-4e90-940d-ad61bfd685a5 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 6 21:17:06.942: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-9208" for this suite. 
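Annotation: the emptyDir case above boils down to: mount a Memory-medium emptyDir (tmpfs), run the container as a non-root user, create a file with mode 0666, and verify both the filesystem type and the permissions from inside. An approximate stand-alone version (UID, image, commands and mount path are illustrative; the suite uses its agnhost mounttest image instead):

kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-0666-tmpfs-demo
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1001            # non-root
  containers:
  - name: test-container
    image: busybox
    command: ["sh", "-c", "touch /mnt/test/f && chmod 0666 /mnt/test/f && mount | grep /mnt/test && ls -l /mnt/test/f"]
    volumeMounts:
    - name: scratch
      mountPath: /mnt/test
  volumes:
  - name: scratch
    emptyDir:
      medium: Memory           # tmpfs-backed
EOF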
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":43,"skipped":664,"failed":0} SSSSSSSS ------------------------------ [k8s.io] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 6 21:17:06.950: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 6 21:18:07.065: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-7537" for this suite. • [SLOW TEST:60.123 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]","total":278,"completed":44,"skipped":672,"failed":0} [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 6 21:18:07.073: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Apr 6 21:18:07.147: INFO: Waiting up to 5m0s for pod "downwardapi-volume-ecaeeea7-37c3-46c6-bba0-d4fd25f79e6d" in namespace "projected-4583" to be "success or failure" Apr 6 21:18:07.173: INFO: Pod "downwardapi-volume-ecaeeea7-37c3-46c6-bba0-d4fd25f79e6d": Phase="Pending", Reason="", readiness=false. Elapsed: 26.634151ms Apr 6 21:18:09.177: INFO: Pod "downwardapi-volume-ecaeeea7-37c3-46c6-bba0-d4fd25f79e6d": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.030461887s Apr 6 21:18:11.182: INFO: Pod "downwardapi-volume-ecaeeea7-37c3-46c6-bba0-d4fd25f79e6d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.034776095s STEP: Saw pod success Apr 6 21:18:11.182: INFO: Pod "downwardapi-volume-ecaeeea7-37c3-46c6-bba0-d4fd25f79e6d" satisfied condition "success or failure" Apr 6 21:18:11.185: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-ecaeeea7-37c3-46c6-bba0-d4fd25f79e6d container client-container: STEP: delete the pod Apr 6 21:18:11.208: INFO: Waiting for pod downwardapi-volume-ecaeeea7-37c3-46c6-bba0-d4fd25f79e6d to disappear Apr 6 21:18:11.225: INFO: Pod downwardapi-volume-ecaeeea7-37c3-46c6-bba0-d4fd25f79e6d no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 6 21:18:11.225: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-4583" for this suite. •{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance]","total":278,"completed":45,"skipped":672,"failed":0} SSSSS ------------------------------ [sig-network] Services should be able to change the type from ClusterIP to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 6 21:18:11.232: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139 [It] should be able to change the type from ClusterIP to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating a service clusterip-service with the type=ClusterIP in namespace services-8897 STEP: Creating active service to test reachability when its FQDN is referred as externalName for another service STEP: creating service externalsvc in namespace services-8897 STEP: creating replication controller externalsvc in namespace services-8897 I0406 21:18:11.375149 6 runners.go:189] Created replication controller with name: externalsvc, namespace: services-8897, replica count: 2 I0406 21:18:14.425535 6 runners.go:189] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0406 21:18:17.425798 6 runners.go:189] externalsvc Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady STEP: changing the ClusterIP service to type=ExternalName Apr 6 21:18:17.479: INFO: Creating new exec pod Apr 6 21:18:21.507: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-8897 execpodh6gcg -- /bin/sh -x -c nslookup clusterip-service' Apr 6 21:18:21.758: INFO: stderr: "I0406 21:18:21.647038 486 log.go:172] (0xc00011abb0) (0xc000257360) Create stream\nI0406 21:18:21.647121 486 log.go:172] (0xc00011abb0) (0xc000257360) Stream added, broadcasting: 1\nI0406 21:18:21.654180 486 log.go:172] (0xc00011abb0) Reply frame received for 1\nI0406 21:18:21.654363 486 log.go:172] 
(0xc00011abb0) (0xc0005ff900) Create stream\nI0406 21:18:21.654510 486 log.go:172] (0xc00011abb0) (0xc0005ff900) Stream added, broadcasting: 3\nI0406 21:18:21.658941 486 log.go:172] (0xc00011abb0) Reply frame received for 3\nI0406 21:18:21.658963 486 log.go:172] (0xc00011abb0) (0xc0005ffae0) Create stream\nI0406 21:18:21.658972 486 log.go:172] (0xc00011abb0) (0xc0005ffae0) Stream added, broadcasting: 5\nI0406 21:18:21.660785 486 log.go:172] (0xc00011abb0) Reply frame received for 5\nI0406 21:18:21.739436 486 log.go:172] (0xc00011abb0) Data frame received for 5\nI0406 21:18:21.739461 486 log.go:172] (0xc0005ffae0) (5) Data frame handling\nI0406 21:18:21.739487 486 log.go:172] (0xc0005ffae0) (5) Data frame sent\n+ nslookup clusterip-service\nI0406 21:18:21.749897 486 log.go:172] (0xc00011abb0) Data frame received for 3\nI0406 21:18:21.749913 486 log.go:172] (0xc0005ff900) (3) Data frame handling\nI0406 21:18:21.749924 486 log.go:172] (0xc0005ff900) (3) Data frame sent\nI0406 21:18:21.750988 486 log.go:172] (0xc00011abb0) Data frame received for 3\nI0406 21:18:21.751009 486 log.go:172] (0xc0005ff900) (3) Data frame handling\nI0406 21:18:21.751023 486 log.go:172] (0xc0005ff900) (3) Data frame sent\nI0406 21:18:21.751519 486 log.go:172] (0xc00011abb0) Data frame received for 5\nI0406 21:18:21.751537 486 log.go:172] (0xc0005ffae0) (5) Data frame handling\nI0406 21:18:21.751740 486 log.go:172] (0xc00011abb0) Data frame received for 3\nI0406 21:18:21.751758 486 log.go:172] (0xc0005ff900) (3) Data frame handling\nI0406 21:18:21.753664 486 log.go:172] (0xc00011abb0) Data frame received for 1\nI0406 21:18:21.753702 486 log.go:172] (0xc000257360) (1) Data frame handling\nI0406 21:18:21.753733 486 log.go:172] (0xc000257360) (1) Data frame sent\nI0406 21:18:21.753767 486 log.go:172] (0xc00011abb0) (0xc000257360) Stream removed, broadcasting: 1\nI0406 21:18:21.753799 486 log.go:172] (0xc00011abb0) Go away received\nI0406 21:18:21.754057 486 log.go:172] (0xc00011abb0) (0xc000257360) Stream removed, broadcasting: 1\nI0406 21:18:21.754082 486 log.go:172] (0xc00011abb0) (0xc0005ff900) Stream removed, broadcasting: 3\nI0406 21:18:21.754092 486 log.go:172] (0xc00011abb0) (0xc0005ffae0) Stream removed, broadcasting: 5\n" Apr 6 21:18:21.758: INFO: stdout: "Server:\t\t10.96.0.10\nAddress:\t10.96.0.10#53\n\nclusterip-service.services-8897.svc.cluster.local\tcanonical name = externalsvc.services-8897.svc.cluster.local.\nName:\texternalsvc.services-8897.svc.cluster.local\nAddress: 10.107.246.20\n\n" STEP: deleting ReplicationController externalsvc in namespace services-8897, will wait for the garbage collector to delete the pods Apr 6 21:18:21.818: INFO: Deleting ReplicationController externalsvc took: 6.251908ms Apr 6 21:18:22.118: INFO: Terminating ReplicationController externalsvc pods took: 300.237515ms Apr 6 21:18:29.570: INFO: Cleaning up the ClusterIP to ExternalName test service [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 6 21:18:29.598: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-8897" for this suite. 
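Annotation: the nslookup stdout above is the actual assertion: after the type change, clusterip-service must resolve as a CNAME ("canonical name") to externalsvc.services-8897.svc.cluster.local rather than to its own ClusterIP. The conversion the framework performs through the API corresponds roughly to this patch (a sketch; the assigned clusterIP also has to be cleared when the type becomes ExternalName):

kubectl patch service clusterip-service -n services-8897 --type=merge -p \
  '{"spec":{"type":"ExternalName","externalName":"externalsvc.services-8897.svc.cluster.local","clusterIP":null}}'
# then re-run the check from the exec pod:
kubectl exec -n services-8897 execpodh6gcg -- nslookup clusterip-service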
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143 • [SLOW TEST:18.375 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from ClusterIP to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from ClusterIP to ExternalName [Conformance]","total":278,"completed":46,"skipped":677,"failed":0} SSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 6 21:18:29.608: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] should have monotonically increasing restart count [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating pod liveness-52dd9820-270a-4810-ba89-437000effcd0 in namespace container-probe-3537 Apr 6 21:18:33.724: INFO: Started pod liveness-52dd9820-270a-4810-ba89-437000effcd0 in namespace container-probe-3537 STEP: checking the pod's current state and verifying that restartCount is present Apr 6 21:18:33.727: INFO: Initial restart count of pod liveness-52dd9820-270a-4810-ba89-437000effcd0 is 0 Apr 6 21:18:47.765: INFO: Restart count of pod container-probe-3537/liveness-52dd9820-270a-4810-ba89-437000effcd0 is now 1 (14.037028312s elapsed) Apr 6 21:19:07.805: INFO: Restart count of pod container-probe-3537/liveness-52dd9820-270a-4810-ba89-437000effcd0 is now 2 (34.077669187s elapsed) Apr 6 21:19:27.846: INFO: Restart count of pod container-probe-3537/liveness-52dd9820-270a-4810-ba89-437000effcd0 is now 3 (54.118761676s elapsed) Apr 6 21:19:47.896: INFO: Restart count of pod container-probe-3537/liveness-52dd9820-270a-4810-ba89-437000effcd0 is now 4 (1m14.16879681s elapsed) Apr 6 21:20:48.064: INFO: Restart count of pod container-probe-3537/liveness-52dd9820-270a-4810-ba89-437000effcd0 is now 5 (2m14.336054982s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 6 21:20:48.123: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-3537" for this suite. 
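Annotation: a detail worth noticing in the restart log above: the first four restarts land about 20s apart (the probe's failure window), while the fifth arrives a full minute later, which is the kubelet's exponential crash back-off kicking in; the test itself only asserts that the count increases monotonically. The counter it polls can be read directly (pod and namespace names are the ephemeral ones from this run):

kubectl get pod liveness-52dd9820-270a-4810-ba89-437000effcd0 -n container-probe-3537 \
  -o jsonpath='{.status.containerStatuses[0].restartCount}'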
• [SLOW TEST:138.528 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should have monotonically increasing restart count [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","total":278,"completed":47,"skipped":690,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-network] Proxy version v1 should proxy through a service and a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 6 21:20:48.137: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy through a service and a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: starting an echo server on multiple ports STEP: creating replication controller proxy-service-ph4fc in namespace proxy-8715 I0406 21:20:48.541992 6 runners.go:189] Created replication controller with name: proxy-service-ph4fc, namespace: proxy-8715, replica count: 1 I0406 21:20:49.592458 6 runners.go:189] proxy-service-ph4fc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0406 21:20:50.592717 6 runners.go:189] proxy-service-ph4fc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0406 21:20:51.592982 6 runners.go:189] proxy-service-ph4fc Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0406 21:20:52.593358 6 runners.go:189] proxy-service-ph4fc Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0406 21:20:53.593628 6 runners.go:189] proxy-service-ph4fc Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0406 21:20:54.593876 6 runners.go:189] proxy-service-ph4fc Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0406 21:20:55.594128 6 runners.go:189] proxy-service-ph4fc Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0406 21:20:56.594378 6 runners.go:189] proxy-service-ph4fc Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0406 21:20:57.594646 6 runners.go:189] proxy-service-ph4fc Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0406 21:20:58.594846 6 runners.go:189] proxy-service-ph4fc Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0406 21:20:59.595111 6 runners.go:189] proxy-service-ph4fc Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0406 21:21:00.595338 6 runners.go:189] 
proxy-service-ph4fc Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Apr 6 21:21:00.599: INFO: setup took 12.397154092s, starting test cases STEP: running 16 cases, 20 attempts per case, 320 total attempts Apr 6 21:21:00.613: INFO: (0) /api/v1/namespaces/proxy-8715/pods/http:proxy-service-ph4fc-29b4z:162/proxy/: bar (200; 13.809688ms) Apr 6 21:21:00.613: INFO: (0) /api/v1/namespaces/proxy-8715/pods/proxy-service-ph4fc-29b4z:160/proxy/: foo (200; 13.717587ms) Apr 6 21:21:00.613: INFO: (0) /api/v1/namespaces/proxy-8715/pods/proxy-service-ph4fc-29b4z:162/proxy/: bar (200; 14.105776ms) Apr 6 21:21:00.613: INFO: (0) /api/v1/namespaces/proxy-8715/pods/http:proxy-service-ph4fc-29b4z:1080/proxy/: ... (200; 14.099349ms) Apr 6 21:21:00.613: INFO: (0) /api/v1/namespaces/proxy-8715/services/http:proxy-service-ph4fc:portname2/proxy/: bar (200; 14.281538ms) Apr 6 21:21:00.613: INFO: (0) /api/v1/namespaces/proxy-8715/pods/proxy-service-ph4fc-29b4z:1080/proxy/: test<... (200; 14.326702ms) Apr 6 21:21:00.614: INFO: (0) /api/v1/namespaces/proxy-8715/services/proxy-service-ph4fc:portname1/proxy/: foo (200; 15.24242ms) Apr 6 21:21:00.614: INFO: (0) /api/v1/namespaces/proxy-8715/services/proxy-service-ph4fc:portname2/proxy/: bar (200; 15.328886ms) Apr 6 21:21:00.616: INFO: (0) /api/v1/namespaces/proxy-8715/pods/http:proxy-service-ph4fc-29b4z:160/proxy/: foo (200; 16.919059ms) Apr 6 21:21:00.616: INFO: (0) /api/v1/namespaces/proxy-8715/pods/proxy-service-ph4fc-29b4z/proxy/: test (200; 16.861744ms) Apr 6 21:21:00.621: INFO: (0) /api/v1/namespaces/proxy-8715/services/http:proxy-service-ph4fc:portname1/proxy/: foo (200; 21.711101ms) Apr 6 21:21:00.621: INFO: (0) /api/v1/namespaces/proxy-8715/pods/https:proxy-service-ph4fc-29b4z:443/proxy/: test<... (200; 3.107095ms) Apr 6 21:21:00.626: INFO: (1) /api/v1/namespaces/proxy-8715/pods/proxy-service-ph4fc-29b4z/proxy/: test (200; 5.064839ms) Apr 6 21:21:00.626: INFO: (1) /api/v1/namespaces/proxy-8715/pods/proxy-service-ph4fc-29b4z:162/proxy/: bar (200; 5.159287ms) Apr 6 21:21:00.627: INFO: (1) /api/v1/namespaces/proxy-8715/pods/http:proxy-service-ph4fc-29b4z:162/proxy/: bar (200; 5.682371ms) Apr 6 21:21:00.627: INFO: (1) /api/v1/namespaces/proxy-8715/pods/https:proxy-service-ph4fc-29b4z:443/proxy/: ... (200; 6.455764ms) Apr 6 21:21:00.629: INFO: (1) /api/v1/namespaces/proxy-8715/services/http:proxy-service-ph4fc:portname2/proxy/: bar (200; 7.399457ms) Apr 6 21:21:00.629: INFO: (1) /api/v1/namespaces/proxy-8715/services/proxy-service-ph4fc:portname2/proxy/: bar (200; 7.469775ms) Apr 6 21:21:00.629: INFO: (1) /api/v1/namespaces/proxy-8715/services/https:proxy-service-ph4fc:tlsportname2/proxy/: tls qux (200; 7.568024ms) Apr 6 21:21:00.633: INFO: (2) /api/v1/namespaces/proxy-8715/pods/proxy-service-ph4fc-29b4z:162/proxy/: bar (200; 4.176236ms) Apr 6 21:21:00.633: INFO: (2) /api/v1/namespaces/proxy-8715/pods/proxy-service-ph4fc-29b4z/proxy/: test (200; 4.315682ms) Apr 6 21:21:00.633: INFO: (2) /api/v1/namespaces/proxy-8715/pods/http:proxy-service-ph4fc-29b4z:1080/proxy/: ... (200; 4.272279ms) Apr 6 21:21:00.633: INFO: (2) /api/v1/namespaces/proxy-8715/pods/https:proxy-service-ph4fc-29b4z:443/proxy/: test<... 
(200; 4.727276ms) Apr 6 21:21:00.634: INFO: (2) /api/v1/namespaces/proxy-8715/pods/https:proxy-service-ph4fc-29b4z:462/proxy/: tls qux (200; 4.726264ms) Apr 6 21:21:00.634: INFO: (2) /api/v1/namespaces/proxy-8715/services/https:proxy-service-ph4fc:tlsportname2/proxy/: tls qux (200; 5.198932ms) Apr 6 21:21:00.634: INFO: (2) /api/v1/namespaces/proxy-8715/services/proxy-service-ph4fc:portname2/proxy/: bar (200; 5.168538ms) Apr 6 21:21:00.634: INFO: (2) /api/v1/namespaces/proxy-8715/pods/proxy-service-ph4fc-29b4z:160/proxy/: foo (200; 5.259469ms) Apr 6 21:21:00.634: INFO: (2) /api/v1/namespaces/proxy-8715/pods/https:proxy-service-ph4fc-29b4z:460/proxy/: tls baz (200; 5.4584ms) Apr 6 21:21:00.634: INFO: (2) /api/v1/namespaces/proxy-8715/services/proxy-service-ph4fc:portname1/proxy/: foo (200; 5.389042ms) Apr 6 21:21:00.635: INFO: (2) /api/v1/namespaces/proxy-8715/pods/http:proxy-service-ph4fc-29b4z:162/proxy/: bar (200; 5.676089ms) Apr 6 21:21:00.635: INFO: (2) /api/v1/namespaces/proxy-8715/pods/http:proxy-service-ph4fc-29b4z:160/proxy/: foo (200; 5.720618ms) Apr 6 21:21:00.637: INFO: (3) /api/v1/namespaces/proxy-8715/pods/https:proxy-service-ph4fc-29b4z:443/proxy/: ... (200; 3.067036ms) Apr 6 21:21:00.639: INFO: (3) /api/v1/namespaces/proxy-8715/pods/proxy-service-ph4fc-29b4z:160/proxy/: foo (200; 4.151933ms) Apr 6 21:21:00.639: INFO: (3) /api/v1/namespaces/proxy-8715/pods/proxy-service-ph4fc-29b4z/proxy/: test (200; 4.194451ms) Apr 6 21:21:00.639: INFO: (3) /api/v1/namespaces/proxy-8715/pods/http:proxy-service-ph4fc-29b4z:162/proxy/: bar (200; 4.376235ms) Apr 6 21:21:00.639: INFO: (3) /api/v1/namespaces/proxy-8715/pods/https:proxy-service-ph4fc-29b4z:460/proxy/: tls baz (200; 4.301535ms) Apr 6 21:21:00.639: INFO: (3) /api/v1/namespaces/proxy-8715/pods/http:proxy-service-ph4fc-29b4z:160/proxy/: foo (200; 4.4125ms) Apr 6 21:21:00.639: INFO: (3) /api/v1/namespaces/proxy-8715/pods/proxy-service-ph4fc-29b4z:162/proxy/: bar (200; 4.355715ms) Apr 6 21:21:00.639: INFO: (3) /api/v1/namespaces/proxy-8715/pods/proxy-service-ph4fc-29b4z:1080/proxy/: test<... (200; 4.357993ms) Apr 6 21:21:00.639: INFO: (3) /api/v1/namespaces/proxy-8715/services/http:proxy-service-ph4fc:portname2/proxy/: bar (200; 4.48836ms) Apr 6 21:21:00.639: INFO: (3) /api/v1/namespaces/proxy-8715/pods/https:proxy-service-ph4fc-29b4z:462/proxy/: tls qux (200; 4.734609ms) Apr 6 21:21:00.640: INFO: (3) /api/v1/namespaces/proxy-8715/services/proxy-service-ph4fc:portname1/proxy/: foo (200; 5.206012ms) Apr 6 21:21:00.640: INFO: (3) /api/v1/namespaces/proxy-8715/services/https:proxy-service-ph4fc:tlsportname2/proxy/: tls qux (200; 5.21719ms) Apr 6 21:21:00.640: INFO: (3) /api/v1/namespaces/proxy-8715/services/proxy-service-ph4fc:portname2/proxy/: bar (200; 5.340251ms) Apr 6 21:21:00.640: INFO: (3) /api/v1/namespaces/proxy-8715/services/http:proxy-service-ph4fc:portname1/proxy/: foo (200; 5.344329ms) Apr 6 21:21:00.640: INFO: (3) /api/v1/namespaces/proxy-8715/services/https:proxy-service-ph4fc:tlsportname1/proxy/: tls baz (200; 5.431198ms) Apr 6 21:21:00.642: INFO: (4) /api/v1/namespaces/proxy-8715/pods/proxy-service-ph4fc-29b4z:162/proxy/: bar (200; 1.908466ms) Apr 6 21:21:00.644: INFO: (4) /api/v1/namespaces/proxy-8715/pods/http:proxy-service-ph4fc-29b4z:160/proxy/: foo (200; 3.988523ms) Apr 6 21:21:00.644: INFO: (4) /api/v1/namespaces/proxy-8715/pods/http:proxy-service-ph4fc-29b4z:162/proxy/: bar (200; 3.965781ms) Apr 6 21:21:00.645: INFO: (4) /api/v1/namespaces/proxy-8715/pods/proxy-service-ph4fc-29b4z:1080/proxy/: test<... 
(200; 4.207551ms) Apr 6 21:21:00.645: INFO: (4) /api/v1/namespaces/proxy-8715/pods/proxy-service-ph4fc-29b4z/proxy/: test (200; 4.176828ms) Apr 6 21:21:00.645: INFO: (4) /api/v1/namespaces/proxy-8715/pods/proxy-service-ph4fc-29b4z:160/proxy/: foo (200; 4.393341ms) Apr 6 21:21:00.645: INFO: (4) /api/v1/namespaces/proxy-8715/pods/https:proxy-service-ph4fc-29b4z:460/proxy/: tls baz (200; 4.570314ms) Apr 6 21:21:00.645: INFO: (4) /api/v1/namespaces/proxy-8715/pods/https:proxy-service-ph4fc-29b4z:443/proxy/: ... (200; 5.190731ms) Apr 6 21:21:00.646: INFO: (4) /api/v1/namespaces/proxy-8715/services/proxy-service-ph4fc:portname2/proxy/: bar (200; 5.304848ms) Apr 6 21:21:00.646: INFO: (4) /api/v1/namespaces/proxy-8715/services/https:proxy-service-ph4fc:tlsportname1/proxy/: tls baz (200; 5.308094ms) Apr 6 21:21:00.646: INFO: (4) /api/v1/namespaces/proxy-8715/services/proxy-service-ph4fc:portname1/proxy/: foo (200; 5.32369ms) Apr 6 21:21:00.648: INFO: (5) /api/v1/namespaces/proxy-8715/pods/proxy-service-ph4fc-29b4z:1080/proxy/: test<... (200; 2.108558ms) Apr 6 21:21:00.648: INFO: (5) /api/v1/namespaces/proxy-8715/pods/https:proxy-service-ph4fc-29b4z:462/proxy/: tls qux (200; 2.394009ms) Apr 6 21:21:00.650: INFO: (5) /api/v1/namespaces/proxy-8715/pods/http:proxy-service-ph4fc-29b4z:1080/proxy/: ... (200; 3.8597ms) Apr 6 21:21:00.650: INFO: (5) /api/v1/namespaces/proxy-8715/pods/http:proxy-service-ph4fc-29b4z:162/proxy/: bar (200; 4.002697ms) Apr 6 21:21:00.650: INFO: (5) /api/v1/namespaces/proxy-8715/pods/proxy-service-ph4fc-29b4z:162/proxy/: bar (200; 4.257423ms) Apr 6 21:21:00.650: INFO: (5) /api/v1/namespaces/proxy-8715/pods/http:proxy-service-ph4fc-29b4z:160/proxy/: foo (200; 4.279424ms) Apr 6 21:21:00.650: INFO: (5) /api/v1/namespaces/proxy-8715/pods/proxy-service-ph4fc-29b4z/proxy/: test (200; 4.330181ms) Apr 6 21:21:00.650: INFO: (5) /api/v1/namespaces/proxy-8715/pods/proxy-service-ph4fc-29b4z:160/proxy/: foo (200; 4.426572ms) Apr 6 21:21:00.651: INFO: (5) /api/v1/namespaces/proxy-8715/services/http:proxy-service-ph4fc:portname2/proxy/: bar (200; 4.770508ms) Apr 6 21:21:00.651: INFO: (5) /api/v1/namespaces/proxy-8715/pods/https:proxy-service-ph4fc-29b4z:460/proxy/: tls baz (200; 4.697596ms) Apr 6 21:21:00.651: INFO: (5) /api/v1/namespaces/proxy-8715/pods/https:proxy-service-ph4fc-29b4z:443/proxy/: test (200; 2.788329ms) Apr 6 21:21:00.656: INFO: (6) /api/v1/namespaces/proxy-8715/services/proxy-service-ph4fc:portname2/proxy/: bar (200; 3.828298ms) Apr 6 21:21:00.656: INFO: (6) /api/v1/namespaces/proxy-8715/services/proxy-service-ph4fc:portname1/proxy/: foo (200; 3.817219ms) Apr 6 21:21:00.656: INFO: (6) /api/v1/namespaces/proxy-8715/pods/proxy-service-ph4fc-29b4z:162/proxy/: bar (200; 3.825058ms) Apr 6 21:21:00.656: INFO: (6) /api/v1/namespaces/proxy-8715/services/https:proxy-service-ph4fc:tlsportname1/proxy/: tls baz (200; 3.748184ms) Apr 6 21:21:00.656: INFO: (6) /api/v1/namespaces/proxy-8715/services/http:proxy-service-ph4fc:portname1/proxy/: foo (200; 4.319886ms) Apr 6 21:21:00.656: INFO: (6) /api/v1/namespaces/proxy-8715/pods/proxy-service-ph4fc-29b4z:1080/proxy/: test<... 
(200; 4.044782ms) Apr 6 21:21:00.656: INFO: (6) /api/v1/namespaces/proxy-8715/services/http:proxy-service-ph4fc:portname2/proxy/: bar (200; 3.901897ms) Apr 6 21:21:00.657: INFO: (6) /api/v1/namespaces/proxy-8715/pods/http:proxy-service-ph4fc-29b4z:162/proxy/: bar (200; 4.243677ms) Apr 6 21:21:00.657: INFO: (6) /api/v1/namespaces/proxy-8715/pods/https:proxy-service-ph4fc-29b4z:462/proxy/: tls qux (200; 4.342839ms) Apr 6 21:21:00.657: INFO: (6) /api/v1/namespaces/proxy-8715/pods/http:proxy-service-ph4fc-29b4z:1080/proxy/: ... (200; 4.509051ms) Apr 6 21:21:00.657: INFO: (6) /api/v1/namespaces/proxy-8715/pods/https:proxy-service-ph4fc-29b4z:443/proxy/: test<... (200; 3.684663ms) Apr 6 21:21:00.662: INFO: (7) /api/v1/namespaces/proxy-8715/services/http:proxy-service-ph4fc:portname2/proxy/: bar (200; 4.565328ms) Apr 6 21:21:00.662: INFO: (7) /api/v1/namespaces/proxy-8715/services/proxy-service-ph4fc:portname2/proxy/: bar (200; 4.679312ms) Apr 6 21:21:00.662: INFO: (7) /api/v1/namespaces/proxy-8715/pods/http:proxy-service-ph4fc-29b4z:1080/proxy/: ... (200; 5.127164ms) Apr 6 21:21:00.662: INFO: (7) /api/v1/namespaces/proxy-8715/services/https:proxy-service-ph4fc:tlsportname1/proxy/: tls baz (200; 5.427868ms) Apr 6 21:21:00.662: INFO: (7) /api/v1/namespaces/proxy-8715/pods/proxy-service-ph4fc-29b4z:162/proxy/: bar (200; 5.465932ms) Apr 6 21:21:00.663: INFO: (7) /api/v1/namespaces/proxy-8715/pods/https:proxy-service-ph4fc-29b4z:460/proxy/: tls baz (200; 5.559126ms) Apr 6 21:21:00.663: INFO: (7) /api/v1/namespaces/proxy-8715/services/https:proxy-service-ph4fc:tlsportname2/proxy/: tls qux (200; 5.611615ms) Apr 6 21:21:00.663: INFO: (7) /api/v1/namespaces/proxy-8715/services/http:proxy-service-ph4fc:portname1/proxy/: foo (200; 5.581825ms) Apr 6 21:21:00.663: INFO: (7) /api/v1/namespaces/proxy-8715/services/proxy-service-ph4fc:portname1/proxy/: foo (200; 5.626889ms) Apr 6 21:21:00.663: INFO: (7) /api/v1/namespaces/proxy-8715/pods/https:proxy-service-ph4fc-29b4z:462/proxy/: tls qux (200; 5.678235ms) Apr 6 21:21:00.663: INFO: (7) /api/v1/namespaces/proxy-8715/pods/http:proxy-service-ph4fc-29b4z:162/proxy/: bar (200; 5.62605ms) Apr 6 21:21:00.663: INFO: (7) /api/v1/namespaces/proxy-8715/pods/http:proxy-service-ph4fc-29b4z:160/proxy/: foo (200; 5.832636ms) Apr 6 21:21:00.663: INFO: (7) /api/v1/namespaces/proxy-8715/pods/proxy-service-ph4fc-29b4z:160/proxy/: foo (200; 5.896514ms) Apr 6 21:21:00.663: INFO: (7) /api/v1/namespaces/proxy-8715/pods/https:proxy-service-ph4fc-29b4z:443/proxy/: test (200; 6.091017ms) Apr 6 21:21:00.667: INFO: (8) /api/v1/namespaces/proxy-8715/pods/http:proxy-service-ph4fc-29b4z:162/proxy/: bar (200; 3.702231ms) Apr 6 21:21:00.667: INFO: (8) /api/v1/namespaces/proxy-8715/pods/proxy-service-ph4fc-29b4z:162/proxy/: bar (200; 3.583995ms) Apr 6 21:21:00.667: INFO: (8) /api/v1/namespaces/proxy-8715/pods/https:proxy-service-ph4fc-29b4z:460/proxy/: tls baz (200; 3.681347ms) Apr 6 21:21:00.667: INFO: (8) /api/v1/namespaces/proxy-8715/pods/proxy-service-ph4fc-29b4z/proxy/: test (200; 3.641587ms) Apr 6 21:21:00.667: INFO: (8) /api/v1/namespaces/proxy-8715/pods/http:proxy-service-ph4fc-29b4z:160/proxy/: foo (200; 3.919459ms) Apr 6 21:21:00.667: INFO: (8) /api/v1/namespaces/proxy-8715/pods/proxy-service-ph4fc-29b4z:1080/proxy/: test<... (200; 3.706589ms) Apr 6 21:21:00.667: INFO: (8) /api/v1/namespaces/proxy-8715/pods/https:proxy-service-ph4fc-29b4z:443/proxy/: ... 
(200; 3.914669ms) Apr 6 21:21:00.667: INFO: (8) /api/v1/namespaces/proxy-8715/pods/https:proxy-service-ph4fc-29b4z:462/proxy/: tls qux (200; 3.942676ms) Apr 6 21:21:00.667: INFO: (8) /api/v1/namespaces/proxy-8715/pods/proxy-service-ph4fc-29b4z:160/proxy/: foo (200; 3.871935ms) Apr 6 21:21:00.669: INFO: (8) /api/v1/namespaces/proxy-8715/services/https:proxy-service-ph4fc:tlsportname1/proxy/: tls baz (200; 5.680144ms) Apr 6 21:21:00.669: INFO: (8) /api/v1/namespaces/proxy-8715/services/proxy-service-ph4fc:portname1/proxy/: foo (200; 5.861774ms) Apr 6 21:21:00.669: INFO: (8) /api/v1/namespaces/proxy-8715/services/http:proxy-service-ph4fc:portname2/proxy/: bar (200; 6.087511ms) Apr 6 21:21:00.669: INFO: (8) /api/v1/namespaces/proxy-8715/services/proxy-service-ph4fc:portname2/proxy/: bar (200; 6.023051ms) Apr 6 21:21:00.669: INFO: (8) /api/v1/namespaces/proxy-8715/services/http:proxy-service-ph4fc:portname1/proxy/: foo (200; 6.163632ms) Apr 6 21:21:00.669: INFO: (8) /api/v1/namespaces/proxy-8715/services/https:proxy-service-ph4fc:tlsportname2/proxy/: tls qux (200; 6.092956ms) Apr 6 21:21:00.673: INFO: (9) /api/v1/namespaces/proxy-8715/pods/http:proxy-service-ph4fc-29b4z:1080/proxy/: ... (200; 3.861058ms) Apr 6 21:21:00.674: INFO: (9) /api/v1/namespaces/proxy-8715/pods/proxy-service-ph4fc-29b4z:162/proxy/: bar (200; 4.021428ms) Apr 6 21:21:00.674: INFO: (9) /api/v1/namespaces/proxy-8715/pods/https:proxy-service-ph4fc-29b4z:462/proxy/: tls qux (200; 4.105605ms) Apr 6 21:21:00.674: INFO: (9) /api/v1/namespaces/proxy-8715/pods/proxy-service-ph4fc-29b4z:160/proxy/: foo (200; 4.166289ms) Apr 6 21:21:00.685: INFO: (9) /api/v1/namespaces/proxy-8715/pods/http:proxy-service-ph4fc-29b4z:160/proxy/: foo (200; 15.201268ms) Apr 6 21:21:00.685: INFO: (9) /api/v1/namespaces/proxy-8715/pods/http:proxy-service-ph4fc-29b4z:162/proxy/: bar (200; 15.271085ms) Apr 6 21:21:00.685: INFO: (9) /api/v1/namespaces/proxy-8715/services/proxy-service-ph4fc:portname2/proxy/: bar (200; 15.350252ms) Apr 6 21:21:00.685: INFO: (9) /api/v1/namespaces/proxy-8715/pods/proxy-service-ph4fc-29b4z/proxy/: test (200; 15.368125ms) Apr 6 21:21:00.685: INFO: (9) /api/v1/namespaces/proxy-8715/pods/https:proxy-service-ph4fc-29b4z:443/proxy/: test<... 
(200; 15.388556ms) Apr 6 21:21:00.685: INFO: (9) /api/v1/namespaces/proxy-8715/services/https:proxy-service-ph4fc:tlsportname1/proxy/: tls baz (200; 15.464563ms) Apr 6 21:21:00.685: INFO: (9) /api/v1/namespaces/proxy-8715/pods/https:proxy-service-ph4fc-29b4z:460/proxy/: tls baz (200; 15.560788ms) Apr 6 21:21:00.685: INFO: (9) /api/v1/namespaces/proxy-8715/services/proxy-service-ph4fc:portname1/proxy/: foo (200; 15.778885ms) Apr 6 21:21:00.686: INFO: (9) /api/v1/namespaces/proxy-8715/services/https:proxy-service-ph4fc:tlsportname2/proxy/: tls qux (200; 16.084448ms) Apr 6 21:21:00.686: INFO: (9) /api/v1/namespaces/proxy-8715/services/http:proxy-service-ph4fc:portname1/proxy/: foo (200; 16.080891ms) Apr 6 21:21:00.686: INFO: (9) /api/v1/namespaces/proxy-8715/services/http:proxy-service-ph4fc:portname2/proxy/: bar (200; 16.070552ms) Apr 6 21:21:00.728: INFO: (10) /api/v1/namespaces/proxy-8715/pods/proxy-service-ph4fc-29b4z:160/proxy/: foo (200; 42.254268ms) Apr 6 21:21:00.728: INFO: (10) /api/v1/namespaces/proxy-8715/pods/https:proxy-service-ph4fc-29b4z:460/proxy/: tls baz (200; 42.313464ms) Apr 6 21:21:00.728: INFO: (10) /api/v1/namespaces/proxy-8715/pods/http:proxy-service-ph4fc-29b4z:162/proxy/: bar (200; 42.351302ms) Apr 6 21:21:00.728: INFO: (10) /api/v1/namespaces/proxy-8715/pods/proxy-service-ph4fc-29b4z:1080/proxy/: test<... (200; 42.435845ms) Apr 6 21:21:00.728: INFO: (10) /api/v1/namespaces/proxy-8715/pods/proxy-service-ph4fc-29b4z/proxy/: test (200; 42.449421ms) Apr 6 21:21:00.728: INFO: (10) /api/v1/namespaces/proxy-8715/pods/http:proxy-service-ph4fc-29b4z:160/proxy/: foo (200; 42.536545ms) Apr 6 21:21:00.728: INFO: (10) /api/v1/namespaces/proxy-8715/pods/http:proxy-service-ph4fc-29b4z:1080/proxy/: ... (200; 42.5622ms) Apr 6 21:21:00.728: INFO: (10) /api/v1/namespaces/proxy-8715/pods/proxy-service-ph4fc-29b4z:162/proxy/: bar (200; 42.644101ms) Apr 6 21:21:00.728: INFO: (10) /api/v1/namespaces/proxy-8715/pods/https:proxy-service-ph4fc-29b4z:443/proxy/: ... (200; 4.80305ms) Apr 6 21:21:00.736: INFO: (11) /api/v1/namespaces/proxy-8715/pods/https:proxy-service-ph4fc-29b4z:443/proxy/: test (200; 5.091835ms) Apr 6 21:21:00.737: INFO: (11) /api/v1/namespaces/proxy-8715/pods/proxy-service-ph4fc-29b4z:160/proxy/: foo (200; 5.144008ms) Apr 6 21:21:00.737: INFO: (11) /api/v1/namespaces/proxy-8715/pods/proxy-service-ph4fc-29b4z:1080/proxy/: test<... 
(200; 5.014681ms) Apr 6 21:21:00.737: INFO: (11) /api/v1/namespaces/proxy-8715/pods/http:proxy-service-ph4fc-29b4z:162/proxy/: bar (200; 5.255723ms) Apr 6 21:21:00.737: INFO: (11) /api/v1/namespaces/proxy-8715/pods/https:proxy-service-ph4fc-29b4z:460/proxy/: tls baz (200; 5.764574ms) Apr 6 21:21:00.738: INFO: (11) /api/v1/namespaces/proxy-8715/services/https:proxy-service-ph4fc:tlsportname2/proxy/: tls qux (200; 6.244764ms) Apr 6 21:21:00.738: INFO: (11) /api/v1/namespaces/proxy-8715/services/proxy-service-ph4fc:portname2/proxy/: bar (200; 6.205238ms) Apr 6 21:21:00.738: INFO: (11) /api/v1/namespaces/proxy-8715/services/https:proxy-service-ph4fc:tlsportname1/proxy/: tls baz (200; 6.175337ms) Apr 6 21:21:00.738: INFO: (11) /api/v1/namespaces/proxy-8715/services/http:proxy-service-ph4fc:portname1/proxy/: foo (200; 6.332724ms) Apr 6 21:21:00.738: INFO: (11) /api/v1/namespaces/proxy-8715/services/proxy-service-ph4fc:portname1/proxy/: foo (200; 6.41612ms) Apr 6 21:21:00.738: INFO: (11) /api/v1/namespaces/proxy-8715/services/http:proxy-service-ph4fc:portname2/proxy/: bar (200; 6.423156ms) Apr 6 21:21:00.740: INFO: (12) /api/v1/namespaces/proxy-8715/pods/proxy-service-ph4fc-29b4z/proxy/: test (200; 2.05289ms) Apr 6 21:21:00.742: INFO: (12) /api/v1/namespaces/proxy-8715/pods/proxy-service-ph4fc-29b4z:162/proxy/: bar (200; 3.548797ms) Apr 6 21:21:00.742: INFO: (12) /api/v1/namespaces/proxy-8715/pods/http:proxy-service-ph4fc-29b4z:162/proxy/: bar (200; 3.55203ms) Apr 6 21:21:00.742: INFO: (12) /api/v1/namespaces/proxy-8715/pods/https:proxy-service-ph4fc-29b4z:443/proxy/: test<... (200; 4.638839ms) Apr 6 21:21:00.743: INFO: (12) /api/v1/namespaces/proxy-8715/pods/http:proxy-service-ph4fc-29b4z:1080/proxy/: ... (200; 4.72955ms) Apr 6 21:21:00.743: INFO: (12) /api/v1/namespaces/proxy-8715/services/https:proxy-service-ph4fc:tlsportname2/proxy/: tls qux (200; 4.789954ms) Apr 6 21:21:00.743: INFO: (12) /api/v1/namespaces/proxy-8715/pods/http:proxy-service-ph4fc-29b4z:160/proxy/: foo (200; 4.832643ms) Apr 6 21:21:00.743: INFO: (12) /api/v1/namespaces/proxy-8715/services/http:proxy-service-ph4fc:portname1/proxy/: foo (200; 4.89841ms) Apr 6 21:21:00.743: INFO: (12) /api/v1/namespaces/proxy-8715/services/proxy-service-ph4fc:portname1/proxy/: foo (200; 5.518732ms) Apr 6 21:21:00.744: INFO: (12) /api/v1/namespaces/proxy-8715/services/proxy-service-ph4fc:portname2/proxy/: bar (200; 5.730186ms) Apr 6 21:21:00.744: INFO: (12) /api/v1/namespaces/proxy-8715/services/http:proxy-service-ph4fc:portname2/proxy/: bar (200; 5.852468ms) Apr 6 21:21:00.744: INFO: (12) /api/v1/namespaces/proxy-8715/services/https:proxy-service-ph4fc:tlsportname1/proxy/: tls baz (200; 5.816671ms) Apr 6 21:21:00.746: INFO: (13) /api/v1/namespaces/proxy-8715/pods/https:proxy-service-ph4fc-29b4z:462/proxy/: tls qux (200; 2.473979ms) Apr 6 21:21:00.752: INFO: (13) /api/v1/namespaces/proxy-8715/pods/proxy-service-ph4fc-29b4z:162/proxy/: bar (200; 7.894165ms) Apr 6 21:21:00.752: INFO: (13) /api/v1/namespaces/proxy-8715/pods/proxy-service-ph4fc-29b4z:1080/proxy/: test<... (200; 7.899159ms) Apr 6 21:21:00.754: INFO: (13) /api/v1/namespaces/proxy-8715/pods/http:proxy-service-ph4fc-29b4z:162/proxy/: bar (200; 10.007659ms) Apr 6 21:21:00.754: INFO: (13) /api/v1/namespaces/proxy-8715/pods/http:proxy-service-ph4fc-29b4z:160/proxy/: foo (200; 10.091823ms) Apr 6 21:21:00.754: INFO: (13) /api/v1/namespaces/proxy-8715/pods/https:proxy-service-ph4fc-29b4z:443/proxy/: ... 
(200; 10.534053ms) Apr 6 21:21:00.755: INFO: (13) /api/v1/namespaces/proxy-8715/pods/proxy-service-ph4fc-29b4z/proxy/: test (200; 10.794077ms) Apr 6 21:21:00.755: INFO: (13) /api/v1/namespaces/proxy-8715/services/proxy-service-ph4fc:portname2/proxy/: bar (200; 11.237215ms) Apr 6 21:21:00.756: INFO: (13) /api/v1/namespaces/proxy-8715/services/proxy-service-ph4fc:portname1/proxy/: foo (200; 11.87205ms) Apr 6 21:21:00.756: INFO: (13) /api/v1/namespaces/proxy-8715/pods/https:proxy-service-ph4fc-29b4z:460/proxy/: tls baz (200; 12.086029ms) Apr 6 21:21:00.756: INFO: (13) /api/v1/namespaces/proxy-8715/services/https:proxy-service-ph4fc:tlsportname2/proxy/: tls qux (200; 12.268171ms) Apr 6 21:21:00.756: INFO: (13) /api/v1/namespaces/proxy-8715/services/http:proxy-service-ph4fc:portname1/proxy/: foo (200; 12.23818ms) Apr 6 21:21:00.756: INFO: (13) /api/v1/namespaces/proxy-8715/services/http:proxy-service-ph4fc:portname2/proxy/: bar (200; 12.25953ms) Apr 6 21:21:00.756: INFO: (13) /api/v1/namespaces/proxy-8715/services/https:proxy-service-ph4fc:tlsportname1/proxy/: tls baz (200; 12.475353ms) Apr 6 21:21:00.760: INFO: (14) /api/v1/namespaces/proxy-8715/pods/proxy-service-ph4fc-29b4z:162/proxy/: bar (200; 3.627474ms) Apr 6 21:21:00.760: INFO: (14) /api/v1/namespaces/proxy-8715/pods/http:proxy-service-ph4fc-29b4z:1080/proxy/: ... (200; 3.949755ms) Apr 6 21:21:00.761: INFO: (14) /api/v1/namespaces/proxy-8715/pods/proxy-service-ph4fc-29b4z:1080/proxy/: test<... (200; 3.965997ms) Apr 6 21:21:00.761: INFO: (14) /api/v1/namespaces/proxy-8715/pods/http:proxy-service-ph4fc-29b4z:160/proxy/: foo (200; 4.093023ms) Apr 6 21:21:00.761: INFO: (14) /api/v1/namespaces/proxy-8715/pods/https:proxy-service-ph4fc-29b4z:462/proxy/: tls qux (200; 4.149426ms) Apr 6 21:21:00.761: INFO: (14) /api/v1/namespaces/proxy-8715/pods/https:proxy-service-ph4fc-29b4z:460/proxy/: tls baz (200; 4.2158ms) Apr 6 21:21:00.761: INFO: (14) /api/v1/namespaces/proxy-8715/pods/proxy-service-ph4fc-29b4z/proxy/: test (200; 4.289037ms) Apr 6 21:21:00.761: INFO: (14) /api/v1/namespaces/proxy-8715/pods/https:proxy-service-ph4fc-29b4z:443/proxy/: test<... (200; 3.773064ms) Apr 6 21:21:00.766: INFO: (15) /api/v1/namespaces/proxy-8715/pods/http:proxy-service-ph4fc-29b4z:1080/proxy/: ... 
(200; 3.823165ms) Apr 6 21:21:00.766: INFO: (15) /api/v1/namespaces/proxy-8715/pods/https:proxy-service-ph4fc-29b4z:443/proxy/: test (200; 4.200926ms) Apr 6 21:21:00.766: INFO: (15) /api/v1/namespaces/proxy-8715/pods/proxy-service-ph4fc-29b4z:160/proxy/: foo (200; 4.169205ms) Apr 6 21:21:00.766: INFO: (15) /api/v1/namespaces/proxy-8715/pods/proxy-service-ph4fc-29b4z:162/proxy/: bar (200; 4.142699ms) Apr 6 21:21:00.766: INFO: (15) /api/v1/namespaces/proxy-8715/pods/http:proxy-service-ph4fc-29b4z:160/proxy/: foo (200; 4.24301ms) Apr 6 21:21:00.766: INFO: (15) /api/v1/namespaces/proxy-8715/pods/https:proxy-service-ph4fc-29b4z:462/proxy/: tls qux (200; 4.248955ms) Apr 6 21:21:00.766: INFO: (15) /api/v1/namespaces/proxy-8715/services/https:proxy-service-ph4fc:tlsportname1/proxy/: tls baz (200; 4.458325ms) Apr 6 21:21:00.766: INFO: (15) /api/v1/namespaces/proxy-8715/services/https:proxy-service-ph4fc:tlsportname2/proxy/: tls qux (200; 4.468716ms) Apr 6 21:21:00.767: INFO: (15) /api/v1/namespaces/proxy-8715/services/http:proxy-service-ph4fc:portname2/proxy/: bar (200; 4.658868ms) Apr 6 21:21:00.767: INFO: (15) /api/v1/namespaces/proxy-8715/services/proxy-service-ph4fc:portname2/proxy/: bar (200; 4.655917ms) Apr 6 21:21:00.767: INFO: (15) /api/v1/namespaces/proxy-8715/services/proxy-service-ph4fc:portname1/proxy/: foo (200; 4.751744ms) Apr 6 21:21:00.767: INFO: (15) /api/v1/namespaces/proxy-8715/services/http:proxy-service-ph4fc:portname1/proxy/: foo (200; 4.795799ms) Apr 6 21:21:00.770: INFO: (16) /api/v1/namespaces/proxy-8715/pods/http:proxy-service-ph4fc-29b4z:1080/proxy/: ... (200; 3.084174ms) Apr 6 21:21:00.770: INFO: (16) /api/v1/namespaces/proxy-8715/pods/proxy-service-ph4fc-29b4z:160/proxy/: foo (200; 3.16911ms) Apr 6 21:21:00.771: INFO: (16) /api/v1/namespaces/proxy-8715/pods/proxy-service-ph4fc-29b4z:162/proxy/: bar (200; 3.70207ms) Apr 6 21:21:00.771: INFO: (16) /api/v1/namespaces/proxy-8715/pods/http:proxy-service-ph4fc-29b4z:162/proxy/: bar (200; 3.765966ms) Apr 6 21:21:00.771: INFO: (16) /api/v1/namespaces/proxy-8715/pods/http:proxy-service-ph4fc-29b4z:160/proxy/: foo (200; 3.843883ms) Apr 6 21:21:00.771: INFO: (16) /api/v1/namespaces/proxy-8715/pods/proxy-service-ph4fc-29b4z:1080/proxy/: test<... (200; 4.293081ms) Apr 6 21:21:00.771: INFO: (16) /api/v1/namespaces/proxy-8715/services/https:proxy-service-ph4fc:tlsportname2/proxy/: tls qux (200; 4.412121ms) Apr 6 21:21:00.771: INFO: (16) /api/v1/namespaces/proxy-8715/pods/https:proxy-service-ph4fc-29b4z:443/proxy/: test (200; 4.876553ms) Apr 6 21:21:00.772: INFO: (16) /api/v1/namespaces/proxy-8715/services/https:proxy-service-ph4fc:tlsportname1/proxy/: tls baz (200; 4.856666ms) Apr 6 21:21:00.775: INFO: (17) /api/v1/namespaces/proxy-8715/pods/http:proxy-service-ph4fc-29b4z:160/proxy/: foo (200; 3.320542ms) Apr 6 21:21:00.775: INFO: (17) /api/v1/namespaces/proxy-8715/services/http:proxy-service-ph4fc:portname2/proxy/: bar (200; 3.362235ms) Apr 6 21:21:00.775: INFO: (17) /api/v1/namespaces/proxy-8715/pods/proxy-service-ph4fc-29b4z:162/proxy/: bar (200; 3.531164ms) Apr 6 21:21:00.775: INFO: (17) /api/v1/namespaces/proxy-8715/pods/http:proxy-service-ph4fc-29b4z:1080/proxy/: ... (200; 3.497099ms) Apr 6 21:21:00.775: INFO: (17) /api/v1/namespaces/proxy-8715/pods/proxy-service-ph4fc-29b4z:1080/proxy/: test<... 
(200; 3.518854ms) Apr 6 21:21:00.776: INFO: (17) /api/v1/namespaces/proxy-8715/pods/https:proxy-service-ph4fc-29b4z:462/proxy/: tls qux (200; 3.591088ms) Apr 6 21:21:00.776: INFO: (17) /api/v1/namespaces/proxy-8715/pods/https:proxy-service-ph4fc-29b4z:443/proxy/: test (200; 4.008833ms) Apr 6 21:21:00.776: INFO: (17) /api/v1/namespaces/proxy-8715/services/http:proxy-service-ph4fc:portname1/proxy/: foo (200; 4.544421ms) Apr 6 21:21:00.777: INFO: (17) /api/v1/namespaces/proxy-8715/services/proxy-service-ph4fc:portname1/proxy/: foo (200; 4.65124ms) Apr 6 21:21:00.777: INFO: (17) /api/v1/namespaces/proxy-8715/services/https:proxy-service-ph4fc:tlsportname1/proxy/: tls baz (200; 4.667503ms) Apr 6 21:21:00.777: INFO: (17) /api/v1/namespaces/proxy-8715/services/https:proxy-service-ph4fc:tlsportname2/proxy/: tls qux (200; 4.613395ms) Apr 6 21:21:00.777: INFO: (17) /api/v1/namespaces/proxy-8715/services/proxy-service-ph4fc:portname2/proxy/: bar (200; 4.738181ms) Apr 6 21:21:00.780: INFO: (18) /api/v1/namespaces/proxy-8715/pods/http:proxy-service-ph4fc-29b4z:1080/proxy/: ... (200; 3.215024ms) Apr 6 21:21:00.780: INFO: (18) /api/v1/namespaces/proxy-8715/pods/http:proxy-service-ph4fc-29b4z:162/proxy/: bar (200; 3.407206ms) Apr 6 21:21:00.780: INFO: (18) /api/v1/namespaces/proxy-8715/pods/proxy-service-ph4fc-29b4z:1080/proxy/: test<... (200; 3.45181ms) Apr 6 21:21:00.780: INFO: (18) /api/v1/namespaces/proxy-8715/pods/http:proxy-service-ph4fc-29b4z:160/proxy/: foo (200; 3.479413ms) Apr 6 21:21:00.780: INFO: (18) /api/v1/namespaces/proxy-8715/pods/proxy-service-ph4fc-29b4z/proxy/: test (200; 3.490338ms) Apr 6 21:21:00.782: INFO: (18) /api/v1/namespaces/proxy-8715/pods/proxy-service-ph4fc-29b4z:162/proxy/: bar (200; 4.725619ms) Apr 6 21:21:00.782: INFO: (18) /api/v1/namespaces/proxy-8715/pods/proxy-service-ph4fc-29b4z:160/proxy/: foo (200; 5.0661ms) Apr 6 21:21:00.782: INFO: (18) /api/v1/namespaces/proxy-8715/pods/https:proxy-service-ph4fc-29b4z:462/proxy/: tls qux (200; 5.573352ms) Apr 6 21:21:00.782: INFO: (18) /api/v1/namespaces/proxy-8715/pods/https:proxy-service-ph4fc-29b4z:460/proxy/: tls baz (200; 5.67222ms) Apr 6 21:21:00.782: INFO: (18) /api/v1/namespaces/proxy-8715/pods/https:proxy-service-ph4fc-29b4z:443/proxy/: test<... 
(200; 4.791005ms) Apr 6 21:21:00.788: INFO: (19) /api/v1/namespaces/proxy-8715/pods/http:proxy-service-ph4fc-29b4z:160/proxy/: foo (200; 5.055526ms) Apr 6 21:21:00.788: INFO: (19) /api/v1/namespaces/proxy-8715/pods/proxy-service-ph4fc-29b4z:162/proxy/: bar (200; 5.074282ms) Apr 6 21:21:00.788: INFO: (19) /api/v1/namespaces/proxy-8715/pods/proxy-service-ph4fc-29b4z:160/proxy/: foo (200; 5.361358ms) Apr 6 21:21:00.788: INFO: (19) /api/v1/namespaces/proxy-8715/pods/http:proxy-service-ph4fc-29b4z:162/proxy/: bar (200; 5.374237ms) Apr 6 21:21:00.789: INFO: (19) /api/v1/namespaces/proxy-8715/services/https:proxy-service-ph4fc:tlsportname1/proxy/: tls baz (200; 5.288067ms) Apr 6 21:21:00.789: INFO: (19) /api/v1/namespaces/proxy-8715/pods/https:proxy-service-ph4fc-29b4z:443/proxy/: test (200; 5.656853ms) Apr 6 21:21:00.789: INFO: (19) /api/v1/namespaces/proxy-8715/pods/https:proxy-service-ph4fc-29b4z:460/proxy/: tls baz (200; 5.678605ms) Apr 6 21:21:00.789: INFO: (19) /api/v1/namespaces/proxy-8715/services/proxy-service-ph4fc:portname2/proxy/: bar (200; 5.60976ms) Apr 6 21:21:00.789: INFO: (19) /api/v1/namespaces/proxy-8715/services/http:proxy-service-ph4fc:portname2/proxy/: bar (200; 5.711978ms) Apr 6 21:21:00.789: INFO: (19) /api/v1/namespaces/proxy-8715/pods/https:proxy-service-ph4fc-29b4z:462/proxy/: tls qux (200; 5.696161ms) Apr 6 21:21:00.789: INFO: (19) /api/v1/namespaces/proxy-8715/services/https:proxy-service-ph4fc:tlsportname2/proxy/: tls qux (200; 5.828125ms) Apr 6 21:21:00.789: INFO: (19) /api/v1/namespaces/proxy-8715/pods/http:proxy-service-ph4fc-29b4z:1080/proxy/: ... (200; 5.7686ms) STEP: deleting ReplicationController proxy-service-ph4fc in namespace proxy-8715, will wait for the garbage collector to delete the pods Apr 6 21:21:00.854: INFO: Deleting ReplicationController proxy-service-ph4fc took: 13.671491ms Apr 6 21:21:01.155: INFO: Terminating ReplicationController proxy-service-ph4fc pods took: 300.233243ms [AfterEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 6 21:21:09.555: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "proxy-8715" for this suite. 
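All twenty iterations above exercise the same apiserver feature: any URL of the form /api/v1/namespaces/<ns>/{pods|services}/[<scheme>:]<name>:<port>/proxy/<path> is forwarded by the apiserver to the target pod or service, which is how the test reaches the foo/bar/tls baz/tls qux endpoints without direct cluster networking. A minimal client-go sketch of one such request, using the namespace and service names from the log (and assuming a client-go recent enough that DoRaw takes a context):

    package main

    import (
        "context"
        "fmt"

        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
        if err != nil {
            panic(err)
        }
        cs := kubernetes.NewForConfigOrDie(cfg)

        // Builds GET /api/v1/namespaces/proxy-8715/services/http:proxy-service-ph4fc:portname1/proxy/
        // which is exactly the URL shape logged above; the test expects the body "foo".
        body, err := cs.CoreV1().Services("proxy-8715").
            ProxyGet("http", "proxy-service-ph4fc", "portname1", "/", nil).
            DoRaw(context.TODO())
        if err != nil {
            panic(err)
        }
        fmt.Println(string(body))
    }

Pods("proxy-8715").ProxyGet(...) produces the pod-level /pods/.../proxy/ URLs the same way.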
• [SLOW TEST:21.428 seconds] [sig-network] Proxy /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:57 should proxy through a service and a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Proxy version v1 should proxy through a service and a pod [Conformance]","total":278,"completed":48,"skipped":706,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 6 21:21:09.566: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Performing setup for networking test in namespace pod-network-test-9167 STEP: creating a selector STEP: Creating the service pods in kubernetes Apr 6 21:21:09.644: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods Apr 6 21:21:37.769: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.227:8080/dial?request=hostname&protocol=udp&host=10.244.1.160&port=8081&tries=1'] Namespace:pod-network-test-9167 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 6 21:21:37.770: INFO: >>> kubeConfig: /root/.kube/config I0406 21:21:37.806624 6 log.go:172] (0xc0029b8420) (0xc002730640) Create stream I0406 21:21:37.806653 6 log.go:172] (0xc0029b8420) (0xc002730640) Stream added, broadcasting: 1 I0406 21:21:37.808456 6 log.go:172] (0xc0029b8420) Reply frame received for 1 I0406 21:21:37.808495 6 log.go:172] (0xc0029b8420) (0xc002816000) Create stream I0406 21:21:37.808504 6 log.go:172] (0xc0029b8420) (0xc002816000) Stream added, broadcasting: 3 I0406 21:21:37.809850 6 log.go:172] (0xc0029b8420) Reply frame received for 3 I0406 21:21:37.809898 6 log.go:172] (0xc0029b8420) (0xc00292e320) Create stream I0406 21:21:37.809918 6 log.go:172] (0xc0029b8420) (0xc00292e320) Stream added, broadcasting: 5 I0406 21:21:37.811226 6 log.go:172] (0xc0029b8420) Reply frame received for 5 I0406 21:21:37.872818 6 log.go:172] (0xc0029b8420) Data frame received for 3 I0406 21:21:37.872848 6 log.go:172] (0xc002816000) (3) Data frame handling I0406 21:21:37.872869 6 log.go:172] (0xc002816000) (3) Data frame sent I0406 21:21:37.874021 6 log.go:172] (0xc0029b8420) Data frame received for 3 I0406 21:21:37.874051 6 log.go:172] (0xc002816000) (3) Data frame handling I0406 21:21:37.874085 6 log.go:172] (0xc0029b8420) Data frame received for 5 I0406 21:21:37.874127 6 log.go:172] (0xc00292e320) (5) Data frame handling I0406 21:21:37.876055 6 log.go:172] (0xc0029b8420) Data frame received for 1 I0406 21:21:37.876087 6 log.go:172] (0xc002730640) (1) Data frame 
handling I0406 21:21:37.876115 6 log.go:172] (0xc002730640) (1) Data frame sent I0406 21:21:37.876140 6 log.go:172] (0xc0029b8420) (0xc002730640) Stream removed, broadcasting: 1 I0406 21:21:37.876163 6 log.go:172] (0xc0029b8420) Go away received I0406 21:21:37.876300 6 log.go:172] (0xc0029b8420) (0xc002730640) Stream removed, broadcasting: 1 I0406 21:21:37.876338 6 log.go:172] (0xc0029b8420) (0xc002816000) Stream removed, broadcasting: 3 I0406 21:21:37.876366 6 log.go:172] (0xc0029b8420) (0xc00292e320) Stream removed, broadcasting: 5 Apr 6 21:21:37.876: INFO: Waiting for responses: map[] Apr 6 21:21:37.880: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.227:8080/dial?request=hostname&protocol=udp&host=10.244.2.226&port=8081&tries=1'] Namespace:pod-network-test-9167 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 6 21:21:37.880: INFO: >>> kubeConfig: /root/.kube/config I0406 21:21:37.915234 6 log.go:172] (0xc001d464d0) (0xc001e903c0) Create stream I0406 21:21:37.915259 6 log.go:172] (0xc001d464d0) (0xc001e903c0) Stream added, broadcasting: 1 I0406 21:21:37.917220 6 log.go:172] (0xc001d464d0) Reply frame received for 1 I0406 21:21:37.917282 6 log.go:172] (0xc001d464d0) (0xc00292e3c0) Create stream I0406 21:21:37.917290 6 log.go:172] (0xc001d464d0) (0xc00292e3c0) Stream added, broadcasting: 3 I0406 21:21:37.918242 6 log.go:172] (0xc001d464d0) Reply frame received for 3 I0406 21:21:37.918277 6 log.go:172] (0xc001d464d0) (0xc0027306e0) Create stream I0406 21:21:37.918290 6 log.go:172] (0xc001d464d0) (0xc0027306e0) Stream added, broadcasting: 5 I0406 21:21:37.919112 6 log.go:172] (0xc001d464d0) Reply frame received for 5 I0406 21:21:37.990055 6 log.go:172] (0xc001d464d0) Data frame received for 3 I0406 21:21:37.990087 6 log.go:172] (0xc00292e3c0) (3) Data frame handling I0406 21:21:37.990124 6 log.go:172] (0xc00292e3c0) (3) Data frame sent I0406 21:21:37.990520 6 log.go:172] (0xc001d464d0) Data frame received for 3 I0406 21:21:37.990558 6 log.go:172] (0xc00292e3c0) (3) Data frame handling I0406 21:21:37.990719 6 log.go:172] (0xc001d464d0) Data frame received for 5 I0406 21:21:37.990747 6 log.go:172] (0xc0027306e0) (5) Data frame handling I0406 21:21:37.992204 6 log.go:172] (0xc001d464d0) Data frame received for 1 I0406 21:21:37.992252 6 log.go:172] (0xc001e903c0) (1) Data frame handling I0406 21:21:37.992299 6 log.go:172] (0xc001e903c0) (1) Data frame sent I0406 21:21:37.992592 6 log.go:172] (0xc001d464d0) (0xc001e903c0) Stream removed, broadcasting: 1 I0406 21:21:37.992704 6 log.go:172] (0xc001d464d0) (0xc001e903c0) Stream removed, broadcasting: 1 I0406 21:21:37.992735 6 log.go:172] (0xc001d464d0) (0xc00292e3c0) Stream removed, broadcasting: 3 I0406 21:21:37.992915 6 log.go:172] (0xc001d464d0) Go away received I0406 21:21:37.993409 6 log.go:172] (0xc001d464d0) (0xc0027306e0) Stream removed, broadcasting: 5 Apr 6 21:21:37.993: INFO: Waiting for responses: map[] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 6 21:21:37.993: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-9167" for this suite. 
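Both probes above are the same trick: the test execs curl inside a host-network helper pod against agnhost's /dial endpoint, and /dial in turn sends a UDP "hostname" request from inside the cluster to the target pod on port 8081, reporting which hostnames answered. A standalone sketch of that probe (IPs copied from the log; the responses field name is inferred from what the test parses, so treat it as an assumption):

    package main

    import (
        "encoding/json"
        "fmt"
        "net/http"
    )

    // dialResponse mirrors the JSON agnhost's /dial endpoint returns (assumed shape).
    type dialResponse struct {
        Responses []string `json:"responses"`
    }

    func main() {
        // Same URL the test curls: ask 10.244.2.227 to dial 10.244.1.160:8081 over UDP.
        url := "http://10.244.2.227:8080/dial?request=hostname&protocol=udp&host=10.244.1.160&port=8081&tries=1"
        resp, err := http.Get(url)
        if err != nil {
            panic(err)
        }
        defer resp.Body.Close()

        var dr dialResponse
        if err := json.NewDecoder(resp.Body).Decode(&dr); err != nil {
            panic(err)
        }
        fmt.Println("hostnames reached over UDP:", dr.Responses)
    }

An empty "Waiting for responses: map[]" in the log means every expected hostname was accounted for.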
• [SLOW TEST:28.436 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29 should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":49,"skipped":749,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 6 21:21:38.002: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to start watching from a specific resource version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating a new configmap STEP: modifying the configmap once STEP: modifying the configmap a second time STEP: deleting the configmap STEP: creating a watch on configmaps from the resource version returned by the first update STEP: Expecting to observe notifications for all changes to the configmap after the first update Apr 6 21:21:38.106: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version watch-2803 /api/v1/namespaces/watch-2803/configmaps/e2e-watch-test-resource-version 62b18146-ccd7-4fc9-862f-a8954057ae58 5973604 0 2020-04-06 21:21:38 +0000 UTC map[watch-this-configmap:from-resource-version] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} Apr 6 21:21:38.106: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version watch-2803 /api/v1/namespaces/watch-2803/configmaps/e2e-watch-test-resource-version 62b18146-ccd7-4fc9-862f-a8954057ae58 5973605 0 2020-04-06 21:21:38 +0000 UTC map[watch-this-configmap:from-resource-version] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 6 21:21:38.106: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-2803" for this suite. 
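The interesting part of this test is ListOptions.ResourceVersion: a watch started at the version returned by the first update replays only events after that point, which is why exactly one MODIFIED (already at mutation: 2) and one DELETED event arrive even though the configmap was modified twice. A client-go sketch of the pattern (context-taking Watch from recent client-go; the resourceVersion value is illustrative):

    package main

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
        if err != nil {
            panic(err)
        }
        cs := kubernetes.NewForConfigOrDie(cfg)

        // Resume from the resourceVersion returned by the first update; the
        // apiserver replays only the changes that happened after it.
        w, err := cs.CoreV1().ConfigMaps("watch-2803").Watch(context.TODO(),
            metav1.ListOptions{ResourceVersion: "5973603"}) // illustrative RV
        if err != nil {
            panic(err)
        }
        defer w.Stop()
        for ev := range w.ResultChan() {
            fmt.Println("Got :", ev.Type) // MODIFIED, then DELETED
        }
    }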
•{"msg":"PASSED [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance]","total":278,"completed":50,"skipped":763,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 6 21:21:38.111: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name configmap-test-upd-f03bc9f7-aad8-43e7-af9e-fb22ee335402 STEP: Creating the pod STEP: Updating configmap configmap-test-upd-f03bc9f7-aad8-43e7-af9e-fb22ee335402 STEP: waiting to observe update in volume [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 6 21:21:44.427: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-4487" for this suite. • [SLOW TEST:6.322 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":51,"skipped":813,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Update Demo should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 6 21:21:44.433: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [BeforeEach] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:324 [It] should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating a replication controller Apr 6 21:21:44.709: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-9512' Apr 6 21:21:45.015: INFO: stderr: "" Apr 6 21:21:45.015: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. 
Apr 6 21:21:45.015: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-9512' Apr 6 21:21:45.127: INFO: stderr: "" Apr 6 21:21:45.127: INFO: stdout: "update-demo-nautilus-7njl2 update-demo-nautilus-nw8p5 " Apr 6 21:21:45.127: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-7njl2 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9512' Apr 6 21:21:45.232: INFO: stderr: "" Apr 6 21:21:45.232: INFO: stdout: "" Apr 6 21:21:45.232: INFO: update-demo-nautilus-7njl2 is created but not running Apr 6 21:21:50.233: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-9512' Apr 6 21:21:50.346: INFO: stderr: "" Apr 6 21:21:50.346: INFO: stdout: "update-demo-nautilus-7njl2 update-demo-nautilus-nw8p5 " Apr 6 21:21:50.346: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-7njl2 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9512' Apr 6 21:21:50.454: INFO: stderr: "" Apr 6 21:21:50.454: INFO: stdout: "true" Apr 6 21:21:50.455: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-7njl2 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-9512' Apr 6 21:21:50.544: INFO: stderr: "" Apr 6 21:21:50.544: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Apr 6 21:21:50.544: INFO: validating pod update-demo-nautilus-7njl2 Apr 6 21:21:50.548: INFO: got data: { "image": "nautilus.jpg" } Apr 6 21:21:50.548: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Apr 6 21:21:50.548: INFO: update-demo-nautilus-7njl2 is verified up and running Apr 6 21:21:50.548: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-nw8p5 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9512' Apr 6 21:21:50.643: INFO: stderr: "" Apr 6 21:21:50.643: INFO: stdout: "true" Apr 6 21:21:50.643: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-nw8p5 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-9512' Apr 6 21:21:50.730: INFO: stderr: "" Apr 6 21:21:50.730: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Apr 6 21:21:50.730: INFO: validating pod update-demo-nautilus-nw8p5 Apr 6 21:21:50.734: INFO: got data: { "image": "nautilus.jpg" } Apr 6 21:21:50.735: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . 
Apr 6 21:21:50.735: INFO: update-demo-nautilus-nw8p5 is verified up and running STEP: scaling down the replication controller Apr 6 21:21:50.737: INFO: scanned /root for discovery docs: Apr 6 21:21:50.737: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=1 --timeout=5m --namespace=kubectl-9512' Apr 6 21:21:51.868: INFO: stderr: "" Apr 6 21:21:51.869: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" STEP: waiting for all containers in name=update-demo pods to come up. Apr 6 21:21:51.869: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-9512' Apr 6 21:21:51.965: INFO: stderr: "" Apr 6 21:21:51.965: INFO: stdout: "update-demo-nautilus-7njl2 update-demo-nautilus-nw8p5 " STEP: Replicas for name=update-demo: expected=1 actual=2 Apr 6 21:21:56.965: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-9512' Apr 6 21:21:57.075: INFO: stderr: "" Apr 6 21:21:57.075: INFO: stdout: "update-demo-nautilus-7njl2 " Apr 6 21:21:57.075: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-7njl2 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9512' Apr 6 21:21:57.175: INFO: stderr: "" Apr 6 21:21:57.175: INFO: stdout: "true" Apr 6 21:21:57.175: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-7njl2 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-9512' Apr 6 21:21:57.278: INFO: stderr: "" Apr 6 21:21:57.278: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Apr 6 21:21:57.278: INFO: validating pod update-demo-nautilus-7njl2 Apr 6 21:21:57.281: INFO: got data: { "image": "nautilus.jpg" } Apr 6 21:21:57.281: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Apr 6 21:21:57.281: INFO: update-demo-nautilus-7njl2 is verified up and running STEP: scaling up the replication controller Apr 6 21:21:57.283: INFO: scanned /root for discovery docs: Apr 6 21:21:57.283: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=2 --timeout=5m --namespace=kubectl-9512' Apr 6 21:21:58.416: INFO: stderr: "" Apr 6 21:21:58.416: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" STEP: waiting for all containers in name=update-demo pods to come up. Apr 6 21:21:58.416: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-9512' Apr 6 21:21:58.510: INFO: stderr: "" Apr 6 21:21:58.510: INFO: stdout: "update-demo-nautilus-7njl2 update-demo-nautilus-8488k " Apr 6 21:21:58.510: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-7njl2 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9512' Apr 6 21:21:58.596: INFO: stderr: "" Apr 6 21:21:58.596: INFO: stdout: "true" Apr 6 21:21:58.596: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-7njl2 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-9512' Apr 6 21:21:58.684: INFO: stderr: "" Apr 6 21:21:58.684: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Apr 6 21:21:58.684: INFO: validating pod update-demo-nautilus-7njl2 Apr 6 21:21:58.687: INFO: got data: { "image": "nautilus.jpg" } Apr 6 21:21:58.687: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Apr 6 21:21:58.687: INFO: update-demo-nautilus-7njl2 is verified up and running Apr 6 21:21:58.687: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-8488k -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9512' Apr 6 21:21:58.998: INFO: stderr: "" Apr 6 21:21:58.998: INFO: stdout: "" Apr 6 21:21:58.998: INFO: update-demo-nautilus-8488k is created but not running Apr 6 21:22:03.998: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-9512' Apr 6 21:22:04.102: INFO: stderr: "" Apr 6 21:22:04.102: INFO: stdout: "update-demo-nautilus-7njl2 update-demo-nautilus-8488k " Apr 6 21:22:04.102: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-7njl2 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9512' Apr 6 21:22:04.199: INFO: stderr: "" Apr 6 21:22:04.199: INFO: stdout: "true" Apr 6 21:22:04.199: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-7njl2 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-9512' Apr 6 21:22:04.295: INFO: stderr: "" Apr 6 21:22:04.295: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Apr 6 21:22:04.295: INFO: validating pod update-demo-nautilus-7njl2 Apr 6 21:22:04.298: INFO: got data: { "image": "nautilus.jpg" } Apr 6 21:22:04.298: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Apr 6 21:22:04.298: INFO: update-demo-nautilus-7njl2 is verified up and running Apr 6 21:22:04.298: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-8488k -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9512' Apr 6 21:22:04.379: INFO: stderr: "" Apr 6 21:22:04.379: INFO: stdout: "true" Apr 6 21:22:04.379: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-8488k -o template --template={{if (exists . 
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-9512' Apr 6 21:22:04.472: INFO: stderr: "" Apr 6 21:22:04.472: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Apr 6 21:22:04.472: INFO: validating pod update-demo-nautilus-8488k Apr 6 21:22:04.476: INFO: got data: { "image": "nautilus.jpg" } Apr 6 21:22:04.476: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Apr 6 21:22:04.476: INFO: update-demo-nautilus-8488k is verified up and running STEP: using delete to clean up resources Apr 6 21:22:04.476: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-9512' Apr 6 21:22:04.568: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Apr 6 21:22:04.568: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" Apr 6 21:22:04.568: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-9512' Apr 6 21:22:04.661: INFO: stderr: "No resources found in kubectl-9512 namespace.\n" Apr 6 21:22:04.661: INFO: stdout: "" Apr 6 21:22:04.661: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-9512 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Apr 6 21:22:04.767: INFO: stderr: "" Apr 6 21:22:04.767: INFO: stdout: "update-demo-nautilus-7njl2\nupdate-demo-nautilus-8488k\n" Apr 6 21:22:05.267: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-9512' Apr 6 21:22:05.361: INFO: stderr: "No resources found in kubectl-9512 namespace.\n" Apr 6 21:22:05.361: INFO: stdout: "" Apr 6 21:22:05.361: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-9512 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Apr 6 21:22:05.461: INFO: stderr: "" Apr 6 21:22:05.461: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 6 21:22:05.461: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-9512" for this suite. 
• [SLOW TEST:21.034 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:322 should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Update Demo should scale a replication controller [Conformance]","total":278,"completed":52,"skipped":827,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl run job should create a job from an image when restart is OnFailure [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 6 21:22:05.468: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [BeforeEach] Kubectl run job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1681 [It] should create a job from an image when restart is OnFailure [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: running the image docker.io/library/httpd:2.4.38-alpine Apr 6 21:22:05.635: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-job --restart=OnFailure --generator=job/v1 --image=docker.io/library/httpd:2.4.38-alpine --namespace=kubectl-4422' Apr 6 21:22:05.740: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" Apr 6 21:22:05.740: INFO: stdout: "job.batch/e2e-test-httpd-job created\n" STEP: verifying the job e2e-test-httpd-job was created [AfterEach] Kubectl run job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1686 Apr 6 21:22:05.774: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete jobs e2e-test-httpd-job --namespace=kubectl-4422' Apr 6 21:22:05.872: INFO: stderr: "" Apr 6 21:22:05.872: INFO: stdout: "job.batch \"e2e-test-httpd-job\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 6 21:22:05.872: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-4422" for this suite. 
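The deprecation warning above is the whole story of the generator flags: `kubectl run --restart=OnFailure --generator=job/v1` expanded client-side into a batch/v1 Job. A client-go sketch of the object that command created (image, restart policy, and namespace from the log; the container name is an assumption):

    package main

    import (
        "context"

        batchv1 "k8s.io/api/batch/v1"
        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
        if err != nil {
            panic(err)
        }
        cs := kubernetes.NewForConfigOrDie(cfg)

        job := &batchv1.Job{
            ObjectMeta: metav1.ObjectMeta{Name: "e2e-test-httpd-job"},
            Spec: batchv1.JobSpec{
                Template: corev1.PodTemplateSpec{
                    Spec: corev1.PodSpec{
                        // --restart=OnFailure is what selected the Job resource.
                        RestartPolicy: corev1.RestartPolicyOnFailure,
                        Containers: []corev1.Container{{
                            Name:  "e2e-test-httpd-job", // assumed container name
                            Image: "docker.io/library/httpd:2.4.38-alpine",
                        }},
                    },
                },
            },
        }
        if _, err := cs.BatchV1().Jobs("kubectl-4422").Create(context.TODO(), job,
            metav1.CreateOptions{}); err != nil {
            panic(err)
        }
    }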
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl run job should create a job from an image when restart is OnFailure [Conformance]","total":278,"completed":53,"skipped":843,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 6 21:22:05.899: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0777 on tmpfs Apr 6 21:22:05.981: INFO: Waiting up to 5m0s for pod "pod-de86377e-60bd-4833-80be-33d762b581be" in namespace "emptydir-7914" to be "success or failure" Apr 6 21:22:05.996: INFO: Pod "pod-de86377e-60bd-4833-80be-33d762b581be": Phase="Pending", Reason="", readiness=false. Elapsed: 14.212549ms Apr 6 21:22:08.033: INFO: Pod "pod-de86377e-60bd-4833-80be-33d762b581be": Phase="Pending", Reason="", readiness=false. Elapsed: 2.051972444s Apr 6 21:22:10.037: INFO: Pod "pod-de86377e-60bd-4833-80be-33d762b581be": Phase="Pending", Reason="", readiness=false. Elapsed: 4.055525279s Apr 6 21:22:12.041: INFO: Pod "pod-de86377e-60bd-4833-80be-33d762b581be": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.0600031s STEP: Saw pod success Apr 6 21:22:12.042: INFO: Pod "pod-de86377e-60bd-4833-80be-33d762b581be" satisfied condition "success or failure" Apr 6 21:22:12.044: INFO: Trying to get logs from node jerma-worker pod pod-de86377e-60bd-4833-80be-33d762b581be container test-container: STEP: delete the pod Apr 6 21:22:12.160: INFO: Waiting for pod pod-de86377e-60bd-4833-80be-33d762b581be to disappear Apr 6 21:22:12.172: INFO: Pod pod-de86377e-60bd-4833-80be-33d762b581be no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 6 21:22:12.172: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-7914" for this suite. 
• [SLOW TEST:6.279 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":54,"skipped":855,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 6 21:22:12.180: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD preserving unknown fields in an embedded object [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Apr 6 21:22:12.223: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with any unknown properties Apr 6 21:22:15.166: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2282 create -f -' Apr 6 21:22:18.003: INFO: stderr: "" Apr 6 21:22:18.003: INFO: stdout: "e2e-test-crd-publish-openapi-5725-crd.crd-publish-openapi-test-unknown-in-nested.example.com/test-cr created\n" Apr 6 21:22:18.004: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2282 delete e2e-test-crd-publish-openapi-5725-crds test-cr' Apr 6 21:22:18.110: INFO: stderr: "" Apr 6 21:22:18.110: INFO: stdout: "e2e-test-crd-publish-openapi-5725-crd.crd-publish-openapi-test-unknown-in-nested.example.com \"test-cr\" deleted\n" Apr 6 21:22:18.110: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2282 apply -f -' Apr 6 21:22:18.357: INFO: stderr: "" Apr 6 21:22:18.357: INFO: stdout: "e2e-test-crd-publish-openapi-5725-crd.crd-publish-openapi-test-unknown-in-nested.example.com/test-cr created\n" Apr 6 21:22:18.357: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2282 delete e2e-test-crd-publish-openapi-5725-crds test-cr' Apr 6 21:22:18.476: INFO: stderr: "" Apr 6 21:22:18.476: INFO: stdout: "e2e-test-crd-publish-openapi-5725-crd.crd-publish-openapi-test-unknown-in-nested.example.com \"test-cr\" deleted\n" STEP: kubectl explain works to explain CR Apr 6 21:22:18.476: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-5725-crds' Apr 6 21:22:18.711: INFO: stderr: "" Apr 6 21:22:18.711: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-5725-crd\nVERSION: crd-publish-openapi-test-unknown-in-nested.example.com/v1\n\nDESCRIPTION:\n preserve-unknown-properties in nested field for Testing\n\nFIELDS:\n apiVersion\t\n APIVersion defines the versioned schema of this representation of an\n 
object. Servers should convert recognized schemas to the latest internal\n value, and may reject unrecognized values. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n\n kind\t\n Kind is a string value representing the REST resource this object\n represents. Servers may infer this from the endpoint the client submits\n requests to. Cannot be updated. In CamelCase. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n\n metadata\t\n Standard object's metadata. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n spec\t\n Specification of Waldo\n\n status\t\n Status of Waldo\n\n" [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 6 21:22:21.602: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-2282" for this suite. • [SLOW TEST:9.430 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD preserving unknown fields in an embedded object [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance]","total":278,"completed":55,"skipped":899,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 6 21:22:21.610: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:86 Apr 6 21:22:21.686: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Apr 6 21:22:21.708: INFO: Waiting for terminating namespaces to be deleted... 
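The kubectl explain output above for the CRD ("Specification of Waldo", "Status of Waldo", no nested fields) is what a schema looks like when spec and status are typed as objects but carry x-kubernetes-preserve-unknown-fields: the apiserver then skips pruning beneath them, which is why create and apply accepted requests "with any unknown properties". A Go sketch of such a CRD (group, kind, and names are illustrative, not the generated e2e ones):

    package main

    import (
        apiextensionsv1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    // preserveCRD sketches the "unknown fields in an embedded object" schema shape.
    func preserveCRD() *apiextensionsv1.CustomResourceDefinition {
        preserve := true
        objectWithUnknowns := func(desc string) apiextensionsv1.JSONSchemaProps {
            return apiextensionsv1.JSONSchemaProps{
                Type:                   "object",
                Description:            desc,
                XPreserveUnknownFields: &preserve, // opt this subtree out of pruning
            }
        }
        return &apiextensionsv1.CustomResourceDefinition{
            ObjectMeta: metav1.ObjectMeta{Name: "waldos.example.com"},
            Spec: apiextensionsv1.CustomResourceDefinitionSpec{
                Group: "example.com",
                Scope: apiextensionsv1.NamespaceScoped,
                Names: apiextensionsv1.CustomResourceDefinitionNames{
                    Plural: "waldos", Singular: "waldo", Kind: "Waldo", ListKind: "WaldoList",
                },
                Versions: []apiextensionsv1.CustomResourceDefinitionVersion{{
                    Name: "v1", Served: true, Storage: true,
                    Schema: &apiextensionsv1.CustomResourceValidation{
                        OpenAPIV3Schema: &apiextensionsv1.JSONSchemaProps{
                            Type: "object",
                            Properties: map[string]apiextensionsv1.JSONSchemaProps{
                                "spec":   objectWithUnknowns("Specification of Waldo"),
                                "status": objectWithUnknowns("Status of Waldo"),
                            },
                        },
                    },
                }},
            },
        }
    }

    func main() { _ = preserveCRD() }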
Apr 6 21:22:21.711: INFO: Logging pods the kubelet thinks are on node jerma-worker before test Apr 6 21:22:21.716: INFO: kindnet-c5svj from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container status recorded) Apr 6 21:22:21.716: INFO: Container kindnet-cni ready: true, restart count 0 Apr 6 21:22:21.716: INFO: kube-proxy-44mlz from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container status recorded) Apr 6 21:22:21.716: INFO: Container kube-proxy ready: true, restart count 0 Apr 6 21:22:21.716: INFO: Logging pods the kubelet thinks are on node jerma-worker2 before test Apr 6 21:22:21.721: INFO: kindnet-zk6sq from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container status recorded) Apr 6 21:22:21.721: INFO: Container kindnet-cni ready: true, restart count 0 Apr 6 21:22:21.721: INFO: kube-bench-hk6h6 from default started at 2020-03-26 15:21:52 +0000 UTC (1 container status recorded) Apr 6 21:22:21.721: INFO: Container kube-bench ready: false, restart count 0 Apr 6 21:22:21.721: INFO: kube-proxy-75q42 from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container status recorded) Apr 6 21:22:21.722: INFO: Container kube-proxy ready: true, restart count 0 Apr 6 21:22:21.722: INFO: kube-hunter-8g6pb from default started at 2020-03-26 15:21:33 +0000 UTC (1 container status recorded) Apr 6 21:22:21.722: INFO: Container kube-hunter ready: false, restart count 0 [It] validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: verifying the node has the label node jerma-worker STEP: verifying the node has the label node jerma-worker2 Apr 6 21:22:21.810: INFO: Pod kindnet-c5svj requesting resource cpu=100m on Node jerma-worker Apr 6 21:22:21.810: INFO: Pod kindnet-zk6sq requesting resource cpu=100m on Node jerma-worker2 Apr 6 21:22:21.810: INFO: Pod kube-proxy-44mlz requesting resource cpu=0m on Node jerma-worker Apr 6 21:22:21.810: INFO: Pod kube-proxy-75q42 requesting resource cpu=0m on Node jerma-worker2 STEP: Starting Pods to consume most of the cluster CPU. Apr 6 21:22:21.810: INFO: Creating a pod which consumes cpu=11130m on Node jerma-worker Apr 6 21:22:21.816: INFO: Creating a pod which consumes cpu=11130m on Node jerma-worker2 STEP: Creating another pod that requires an unavailable amount of CPU.
STEP: Considering event: Type = [Normal], Name = [filler-pod-b68661ad-bfef-42fd-b25b-36545f0e8ddb.160357e6a1d08bc9], Reason = [Scheduled], Message = [Successfully assigned sched-pred-6964/filler-pod-b68661ad-bfef-42fd-b25b-36545f0e8ddb to jerma-worker] STEP: Considering event: Type = [Normal], Name = [filler-pod-b68661ad-bfef-42fd-b25b-36545f0e8ddb.160357e6ef35f276], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [filler-pod-b68661ad-bfef-42fd-b25b-36545f0e8ddb.160357e7256f311a], Reason = [Created], Message = [Created container filler-pod-b68661ad-bfef-42fd-b25b-36545f0e8ddb] STEP: Considering event: Type = [Normal], Name = [filler-pod-b68661ad-bfef-42fd-b25b-36545f0e8ddb.160357e73f210c84], Reason = [Started], Message = [Started container filler-pod-b68661ad-bfef-42fd-b25b-36545f0e8ddb] STEP: Considering event: Type = [Normal], Name = [filler-pod-d346a371-d44a-49f9-a1e4-3733dc578f39.160357e6a29ffa92], Reason = [Scheduled], Message = [Successfully assigned sched-pred-6964/filler-pod-d346a371-d44a-49f9-a1e4-3733dc578f39 to jerma-worker2] STEP: Considering event: Type = [Normal], Name = [filler-pod-d346a371-d44a-49f9-a1e4-3733dc578f39.160357e727dee363], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [filler-pod-d346a371-d44a-49f9-a1e4-3733dc578f39.160357e7539b86d4], Reason = [Created], Message = [Created container filler-pod-d346a371-d44a-49f9-a1e4-3733dc578f39] STEP: Considering event: Type = [Normal], Name = [filler-pod-d346a371-d44a-49f9-a1e4-3733dc578f39.160357e760c2cb08], Reason = [Started], Message = [Started container filler-pod-d346a371-d44a-49f9-a1e4-3733dc578f39] STEP: Considering event: Type = [Warning], Name = [additional-pod.160357e7920ede4b], Reason = [FailedScheduling], Message = [0/3 nodes are available: 1 node(s) had taints that the pod didn't tolerate, 2 Insufficient cpu.] STEP: removing the label node off the node jerma-worker STEP: verifying the node doesn't have the label node STEP: removing the label node off the node jerma-worker2 STEP: verifying the node doesn't have the label node [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 6 21:22:26.956: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-6964" for this suite. 
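The mechanics above are purely declarative: each filler pod requests cpu=11130m, apparently sized from the node's allocatable CPU minus what kindnet and kube-proxy already request, so the follow-up pod cannot fit anywhere and the scheduler emits the FailedScheduling "Insufficient cpu" event the test waits for. A sketch of a filler pod (the node-label selector key "node" is inferred from the "verifying the node has the label node ..." steps, so treat it as an assumption):

    package main

    import (
        corev1 "k8s.io/api/core/v1"
        "k8s.io/apimachinery/pkg/api/resource"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    // fillerPod requests a fixed amount of CPU on one labeled node, leaving
    // too little headroom for any additional pod with a CPU request.
    func fillerPod(node, cpu string) *corev1.Pod {
        return &corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: "filler-pod-" + node},
            Spec: corev1.PodSpec{
                NodeSelector: map[string]string{"node": node}, // assumed label key
                Containers: []corev1.Container{{
                    Name:  "filler",
                    Image: "k8s.gcr.io/pause:3.1", // same image as in the events above
                    Resources: corev1.ResourceRequirements{
                        Requests: corev1.ResourceList{corev1.ResourceCPU: resource.MustParse(cpu)},
                        Limits:   corev1.ResourceList{corev1.ResourceCPU: resource.MustParse(cpu)},
                    },
                }},
            },
        }
    }

    func main() { _ = fillerPod("jerma-worker", "11130m") }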
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:77 • [SLOW TEST:5.354 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance]","total":278,"completed":56,"skipped":916,"failed":0} SSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 6 21:22:26.964: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64 STEP: create the container to handle the HTTPGet hook request. [It] should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the pod with lifecycle hook STEP: delete the pod with lifecycle hook Apr 6 21:22:35.084: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Apr 6 21:22:35.105: INFO: Pod pod-with-prestop-http-hook still exists Apr 6 21:22:37.105: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Apr 6 21:22:37.114: INFO: Pod pod-with-prestop-http-hook still exists Apr 6 21:22:39.105: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Apr 6 21:22:39.119: INFO: Pod pod-with-prestop-http-hook still exists Apr 6 21:22:41.105: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Apr 6 21:22:41.109: INFO: Pod pod-with-prestop-http-hook still exists Apr 6 21:22:43.105: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Apr 6 21:22:43.109: INFO: Pod pod-with-prestop-http-hook still exists Apr 6 21:22:45.105: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Apr 6 21:22:45.109: INFO: Pod pod-with-prestop-http-hook still exists Apr 6 21:22:47.105: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Apr 6 21:22:47.110: INFO: Pod pod-with-prestop-http-hook still exists Apr 6 21:22:49.105: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Apr 6 21:22:49.109: INFO: Pod pod-with-prestop-http-hook still exists Apr 6 21:22:51.105: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Apr 6 21:22:51.109: INFO: Pod pod-with-prestop-http-hook no longer exists STEP: check prestop hook [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 6 21:22:51.116: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-5434" for this suite. 
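The roughly 16-second disappearance loop above is the preStop hook at work: when the pod is deleted, the kubelet first performs the hook's HTTP GET against the handler pod created in BeforeEach and only then stops the container, and the final "check prestop hook" step asserts the handler actually saw that request. A sketch of the pod under test (handler IP, port, and path are assumptions; on client-go before v0.23 the hook type is v1.Handler rather than v1.LifecycleHandler):

    package main

    import (
        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/intstr"
    )

    // preStopPod is killed only after the kubelet completes the HTTP GET below.
    func preStopPod() *corev1.Pod {
        return &corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: "pod-with-prestop-http-hook"},
            Spec: corev1.PodSpec{
                Containers: []corev1.Container{{
                    Name:  "pod-with-prestop-http-hook",
                    Image: "k8s.gcr.io/pause:3.1", // illustrative
                    Lifecycle: &corev1.Lifecycle{
                        PreStop: &corev1.LifecycleHandler{
                            HTTPGet: &corev1.HTTPGetAction{
                                Path: "/echo?msg=prestop", // assumed path
                                Host: "10.244.1.170",      // assumed handler pod IP
                                Port: intstr.FromInt(8080),
                            },
                        },
                    },
                }},
            },
        }
    }

    func main() { _ = preStopPod() }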
• [SLOW TEST:24.161 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance]","total":278,"completed":57,"skipped":919,"failed":0} SSSSSSSS ------------------------------ [sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 6 21:22:51.126: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a test namespace STEP: Waiting for a default service account to be provisioned in namespace STEP: Creating a pod in the namespace STEP: Waiting for the pod to have running status STEP: Deleting the namespace STEP: Waiting for the namespace to be removed. STEP: Recreating the namespace STEP: Verifying there are no pods in the namespace [AfterEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 6 21:23:22.317: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "namespaces-4897" for this suite. STEP: Destroying namespace "nsdeletetest-8175" for this suite. Apr 6 21:23:22.353: INFO: Namespace nsdeletetest-8175 was already deleted STEP: Destroying namespace "nsdeletetest-3819" for this suite. 
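The half-minute gap between "Deleting the namespace" and the namespace actually being removed is inherent: namespace deletion is asynchronous, and the namespace sits in Terminating until every pod in it is gone, which is exactly the property this test verifies. A client-go sketch of delete-then-wait (namespace name from the log; wait.PollImmediate is the pre-generics polling helper):

    package main

    import (
        "context"
        "time"

        apierrors "k8s.io/apimachinery/pkg/api/errors"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
        if err != nil {
            panic(err)
        }
        cs := kubernetes.NewForConfigOrDie(cfg)

        ns := "nsdeletetest-8175"
        if err := cs.CoreV1().Namespaces().Delete(context.TODO(), ns,
            metav1.DeleteOptions{}); err != nil && !apierrors.IsNotFound(err) {
            panic(err)
        }
        // Poll until Get returns NotFound: only then have the namespace and
        // all pods inside it actually been removed.
        err = wait.PollImmediate(2*time.Second, time.Minute, func() (bool, error) {
            _, err := cs.CoreV1().Namespaces().Get(context.TODO(), ns, metav1.GetOptions{})
            if apierrors.IsNotFound(err) {
                return true, nil
            }
            return false, err
        })
        if err != nil {
            panic(err)
        }
    }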
• [SLOW TEST:31.231 seconds] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance]","total":278,"completed":58,"skipped":927,"failed":0} S ------------------------------ [k8s.io] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 6 21:23:22.357: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153 [It] should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the pod Apr 6 21:23:22.409: INFO: PodSpec: initContainers in spec.initContainers Apr 6 21:24:09.655: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-e685b1e8-7b13-4460-839c-ba8eca30557c", GenerateName:"", Namespace:"init-container-7216", SelfLink:"/api/v1/namespaces/init-container-7216/pods/pod-init-e685b1e8-7b13-4460-839c-ba8eca30557c", UID:"967b2fd3-2ac6-4044-828e-9bc981b9d793", ResourceVersion:"5974430", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63721805002, loc:(*time.Location)(0x78ee080)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"409761687"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-lnlg9", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc0058d0cc0), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), 
AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}}, InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-lnlg9", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-lnlg9", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"k8s.gcr.io/pause:3.1", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-lnlg9", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc003c6a508), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", 
NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"jerma-worker2", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc002c2cfc0), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc003c6a5c0)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc003c6a5e0)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc003c6a5e8), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc003c6a5ec), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721805002, loc:(*time.Location)(0x78ee080)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721805002, loc:(*time.Location)(0x78ee080)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721805002, loc:(*time.Location)(0x78ee080)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721805002, loc:(*time.Location)(0x78ee080)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"172.17.0.8", PodIP:"10.244.2.233", PodIPs:[]v1.PodIP{v1.PodIP{IP:"10.244.2.233"}}, StartTime:(*v1.Time)(0xc0035af0e0), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(0xc0035af120), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc00276f110)}, Ready:false, RestartCount:3, Image:"docker.io/library/busybox:1.29", ImageID:"docker.io/library/busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796", ContainerID:"containerd://1e3132978552eee94d72859bdd1dcfa63795f3dbf9fb746ef5c5a7a9282592f6", Started:(*bool)(nil)}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc0035af140), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, 
LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"docker.io/library/busybox:1.29", ImageID:"", ContainerID:"", Started:(*bool)(nil)}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc0035af100), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.1", ImageID:"", ContainerID:"", Started:(*bool)(0xc003c6a6df)}}, QOSClass:"Burstable", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)}} [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 6 21:24:09.656: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-7216" for this suite. • [SLOW TEST:47.320 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance]","total":278,"completed":59,"skipped":928,"failed":0} SSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 6 21:24:09.677: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:125 STEP: Setting up server cert STEP: Create role binding to let cr conversion webhook read extension-apiserver-authentication STEP: Deploying the custom resource conversion webhook pod STEP: Wait for the deployment to be ready Apr 6 21:24:10.495: INFO: deployment "sample-crd-conversion-webhook-deployment" doesn't have the required revision set Apr 6 21:24:12.506: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721805050, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721805050, loc:(*time.Location)(0x78ee080)}}, Reason:"MinimumReplicasUnavailable", 
Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721805050, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721805050, loc:(*time.Location)(0x78ee080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-78dcf5dd84\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Apr 6 21:24:15.591: INFO: Waiting for amount of service:e2e-test-crd-conversion-webhook endpoints to be 1 [It] should be able to convert from CR v1 to CR v2 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Apr 6 21:24:15.596: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating a v1 custom resource STEP: v2 custom resource should be converted [AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 6 21:24:16.810: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-webhook-4956" for this suite. [AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:136 • [SLOW TEST:7.226 seconds] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to convert from CR v1 to CR v2 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]","total":278,"completed":60,"skipped":939,"failed":0} SSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 6 21:24:16.903: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Apr 6 21:24:17.983: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Apr 6 21:24:19.994: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", 
LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721805058, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721805058, loc:(*time.Location)(0x78ee080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721805058, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721805057, loc:(*time.Location)(0x78ee080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Apr 6 21:24:23.019: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should be able to deny custom resource creation, update and deletion [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Apr 6 21:24:23.023: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the custom resource webhook via the AdmissionRegistration API STEP: Creating a custom resource that should be denied by the webhook STEP: Creating a custom resource whose deletion would be denied by the webhook STEP: Updating the custom resource with disallowed data should be denied STEP: Deleting the custom resource should be denied STEP: Remove the offending key and value from the custom resource data STEP: Deleting the updated custom resource should be successful [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 6 21:24:24.173: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-8382" for this suite. STEP: Destroying namespace "webhook-8382-markers" for this suite. 
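The registration step above installs a validating webhook scoped to the test CRD. A minimal sketch of such a configuration; the service name and namespace follow the log, while the group, resource, and path are assumptions:

kubectl apply -f - <<'EOF'
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingWebhookConfiguration
metadata:
  name: deny-custom-resource-webhook        # hypothetical name
webhooks:
- name: deny-custom-resource.example.com    # hypothetical
  rules:
  - apiGroups: ["example.com"]              # assumption: the test CRD's group
    apiVersions: ["v1"]
    operations: ["CREATE", "UPDATE", "DELETE"]
    resources: ["foos"]                     # hypothetical plural
  clientConfig:
    service:
      namespace: webhook-8382               # from the log above
      name: e2e-test-webhook                # service name from the log above
      path: /custom-resource                # hypothetical handler path
  admissionReviewVersions: ["v1"]
  sideEffects: None
  failurePolicy: Fail                       # a denied AdmissionReview blocks the request
EOF

With this in place, create, update, and delete of the matching custom resource return the webhook's denial, which is exactly the sequence of STEPs above.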
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:7.349 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to deny custom resource creation, update and deletion [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","total":278,"completed":61,"skipped":944,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should have a working scale subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 6 21:24:24.253: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:64 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:79 STEP: Creating service test in namespace statefulset-1775 [It] should have a working scale subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating statefulset ss in namespace statefulset-1775 Apr 6 21:24:24.507: INFO: Found 0 stateful pods, waiting for 1 Apr 6 21:24:34.513: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: getting scale subresource STEP: updating a scale subresource STEP: verifying the statefulset Spec.Replicas was modified [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90 Apr 6 21:24:34.535: INFO: Deleting all statefulset in ns statefulset-1775 Apr 6 21:24:34.556: INFO: Scaling statefulset ss to 0 Apr 6 21:24:54.638: INFO: Waiting for statefulset status.replicas updated to 0 Apr 6 21:24:54.641: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 6 21:24:54.654: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-1775" for this suite. 
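The scale subresource read and written above is a plain REST endpoint; the same round trip, using the names from the log:

kubectl -n statefulset-1775 scale statefulset ss --replicas=2
kubectl get --raw /apis/apps/v1/namespaces/statefulset-1775/statefulsets/ss/scale
# returns an autoscaling/v1 Scale object whose spec.replicas reflects the update,
# without touching the StatefulSet spec directly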
• [SLOW TEST:30.409 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should have a working scale subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should have a working scale subresource [Conformance]","total":278,"completed":62,"skipped":974,"failed":0} SS ------------------------------ [k8s.io] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 6 21:24:54.662: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should run with the expected status [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Container 'terminate-cmd-rpa': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpa': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpa': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpa': should get the expected 'State' STEP: Container 'terminate-cmd-rpa': should be possible to delete [NodeConformance] STEP: Container 'terminate-cmd-rpof': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpof': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpof': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpof': should get the expected 'State' STEP: Container 'terminate-cmd-rpof': should be possible to delete [NodeConformance] STEP: Container 'terminate-cmd-rpn': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpn': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpn': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpn': should get the expected 'State' STEP: Container 'terminate-cmd-rpn': should be possible to delete [NodeConformance] [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 6 21:25:22.240: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-4316" for this suite. 
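The three containers cycle through restart policies (rpa/rpof/rpn = Always/OnFailure/Never) and assert RestartCount, Phase, Ready, and State. A reduced sketch for the OnFailure case, with hypothetical names:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: terminate-demo                      # hypothetical
spec:
  restartPolicy: OnFailure                  # the suite also runs Always and Never
  containers:
  - name: main
    image: docker.io/library/busybox:1.29
    command: ["/bin/sh", "-c", "exit 1"]    # non-zero exit drives restarts
EOF
kubectl get pod terminate-demo -o jsonpath='{.status.containerStatuses[0].restartCount}'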
• [SLOW TEST:27.586 seconds] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 blackbox test /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38 when starting a container that exits /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:39 should run with the expected status [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance]","total":278,"completed":63,"skipped":976,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 6 21:25:22.249: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name configmap-test-volume-map-cdc19a7f-bba4-4ff2-a44f-6751ac6d081f STEP: Creating a pod to test consume configMaps Apr 6 21:25:22.313: INFO: Waiting up to 5m0s for pod "pod-configmaps-e3357c66-4a16-4c5a-b96a-fe3412ecdcba" in namespace "configmap-5779" to be "success or failure" Apr 6 21:25:22.317: INFO: Pod "pod-configmaps-e3357c66-4a16-4c5a-b96a-fe3412ecdcba": Phase="Pending", Reason="", readiness=false. Elapsed: 3.523787ms Apr 6 21:25:24.321: INFO: Pod "pod-configmaps-e3357c66-4a16-4c5a-b96a-fe3412ecdcba": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007464579s Apr 6 21:25:26.325: INFO: Pod "pod-configmaps-e3357c66-4a16-4c5a-b96a-fe3412ecdcba": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011541502s STEP: Saw pod success Apr 6 21:25:26.325: INFO: Pod "pod-configmaps-e3357c66-4a16-4c5a-b96a-fe3412ecdcba" satisfied condition "success or failure" Apr 6 21:25:26.328: INFO: Trying to get logs from node jerma-worker pod pod-configmaps-e3357c66-4a16-4c5a-b96a-fe3412ecdcba container configmap-volume-test: STEP: delete the pod Apr 6 21:25:26.371: INFO: Waiting for pod pod-configmaps-e3357c66-4a16-4c5a-b96a-fe3412ecdcba to disappear Apr 6 21:25:26.383: INFO: Pod pod-configmaps-e3357c66-4a16-4c5a-b96a-fe3412ecdcba no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 6 21:25:26.383: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-5779" for this suite. 
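"Mappings and Item mode" means the volume lists items that rename a key on disk and set a per-file mode. A sketch under assumed names (the suite generates its own):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: ConfigMap
metadata:
  name: cm-demo                # hypothetical
data:
  data-1: value-1
---
apiVersion: v1
kind: Pod
metadata:
  name: cm-pod                 # hypothetical
spec:
  restartPolicy: Never
  containers:
  - name: configmap-volume-test
    image: docker.io/library/busybox:1.29
    command: ["cat", "/etc/configmap-volume/mapped-key"]
    volumeMounts:
    - name: configmap-volume
      mountPath: /etc/configmap-volume
  volumes:
  - name: configmap-volume
    configMap:
      name: cm-demo
      items:
      - key: data-1
        path: mapped-key       # the "mapping": key renamed on disk
        mode: 0400             # the per-item "Item mode" under test
EOF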
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":64,"skipped":1015,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 6 21:25:26.391: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0777 on node default medium Apr 6 21:25:26.463: INFO: Waiting up to 5m0s for pod "pod-7389aa88-45ab-469f-b581-f496876ca1bd" in namespace "emptydir-2903" to be "success or failure" Apr 6 21:25:26.473: INFO: Pod "pod-7389aa88-45ab-469f-b581-f496876ca1bd": Phase="Pending", Reason="", readiness=false. Elapsed: 9.92646ms Apr 6 21:25:28.477: INFO: Pod "pod-7389aa88-45ab-469f-b581-f496876ca1bd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01390781s Apr 6 21:25:30.481: INFO: Pod "pod-7389aa88-45ab-469f-b581-f496876ca1bd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.018003085s STEP: Saw pod success Apr 6 21:25:30.481: INFO: Pod "pod-7389aa88-45ab-469f-b581-f496876ca1bd" satisfied condition "success or failure" Apr 6 21:25:30.484: INFO: Trying to get logs from node jerma-worker pod pod-7389aa88-45ab-469f-b581-f496876ca1bd container test-container: STEP: delete the pod Apr 6 21:25:30.520: INFO: Waiting for pod pod-7389aa88-45ab-469f-b581-f496876ca1bd to disappear Apr 6 21:25:30.530: INFO: Pod pod-7389aa88-45ab-469f-b581-f496876ca1bd no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 6 21:25:30.530: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-2903" for this suite. 
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":65,"skipped":1035,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 6 21:25:30.539: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should delete RS created by deployment when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the deployment STEP: Wait for the Deployment to create new ReplicaSet STEP: delete the deployment STEP: wait for all rs to be garbage collected STEP: expected 0 rs, got 1 rs STEP: expected 0 pods, got 2 pods STEP: Gathering metrics W0406 21:25:31.716029 6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Apr 6 21:25:31.716: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 6 21:25:31.716: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-139" for this suite. 
•{"msg":"PASSED [sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance]","total":278,"completed":66,"skipped":1057,"failed":0} SSS ------------------------------ [sig-api-machinery] Servers with support for Table transformation should return a 406 for a backend which does not implement metadata [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Servers with support for Table transformation /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 6 21:25:31.727: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename tables STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] Servers with support for Table transformation /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/table_conversion.go:46 [It] should return a 406 for a backend which does not implement metadata [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [AfterEach] [sig-api-machinery] Servers with support for Table transformation /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 6 21:25:31.771: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "tables-5415" for this suite. •{"msg":"PASSED [sig-api-machinery] Servers with support for Table transformation should return a 406 for a backend which does not implement metadata [Conformance]","total":278,"completed":67,"skipped":1060,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 6 21:25:31.787: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0666 on node default medium Apr 6 21:25:31.874: INFO: Waiting up to 5m0s for pod "pod-e15f320f-2378-41a2-8a82-0bff9adcf5b5" in namespace "emptydir-5439" to be "success or failure" Apr 6 21:25:31.884: INFO: Pod "pod-e15f320f-2378-41a2-8a82-0bff9adcf5b5": Phase="Pending", Reason="", readiness=false. Elapsed: 9.925218ms Apr 6 21:25:33.904: INFO: Pod "pod-e15f320f-2378-41a2-8a82-0bff9adcf5b5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.029842536s Apr 6 21:25:35.922: INFO: Pod "pod-e15f320f-2378-41a2-8a82-0bff9adcf5b5": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.047966461s STEP: Saw pod success Apr 6 21:25:35.922: INFO: Pod "pod-e15f320f-2378-41a2-8a82-0bff9adcf5b5" satisfied condition "success or failure" Apr 6 21:25:35.926: INFO: Trying to get logs from node jerma-worker2 pod pod-e15f320f-2378-41a2-8a82-0bff9adcf5b5 container test-container: STEP: delete the pod Apr 6 21:25:35.982: INFO: Waiting for pod pod-e15f320f-2378-41a2-8a82-0bff9adcf5b5 to disappear Apr 6 21:25:35.994: INFO: Pod pod-e15f320f-2378-41a2-8a82-0bff9adcf5b5 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 6 21:25:35.994: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-5439" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":68,"skipped":1100,"failed":0} SSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should verify ResourceQuota with terminating scopes. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 6 21:25:36.002: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should verify ResourceQuota with terminating scopes. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a ResourceQuota with terminating scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a ResourceQuota with not terminating scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a long running pod STEP: Ensuring resource quota with not terminating scope captures the pod usage STEP: Ensuring resource quota with terminating scope ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage STEP: Creating a terminating pod STEP: Ensuring resource quota with terminating scope captures the pod usage STEP: Ensuring resource quota with not terminating scope ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 6 21:25:52.406: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-690" for this suite. • [SLOW TEST:16.413 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should verify ResourceQuota with terminating scopes. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with terminating scopes. 
[Conformance]","total":278,"completed":69,"skipped":1107,"failed":0} SS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 6 21:25:52.415: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting up data [It] should support subpaths with configmap pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating pod pod-subpath-test-configmap-w4x5 STEP: Creating a pod to test atomic-volume-subpath Apr 6 21:25:52.513: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-w4x5" in namespace "subpath-7094" to be "success or failure" Apr 6 21:25:52.525: INFO: Pod "pod-subpath-test-configmap-w4x5": Phase="Pending", Reason="", readiness=false. Elapsed: 12.22439ms Apr 6 21:25:54.576: INFO: Pod "pod-subpath-test-configmap-w4x5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.063042342s Apr 6 21:25:56.581: INFO: Pod "pod-subpath-test-configmap-w4x5": Phase="Running", Reason="", readiness=true. Elapsed: 4.067550531s Apr 6 21:25:58.584: INFO: Pod "pod-subpath-test-configmap-w4x5": Phase="Running", Reason="", readiness=true. Elapsed: 6.070813189s Apr 6 21:26:00.587: INFO: Pod "pod-subpath-test-configmap-w4x5": Phase="Running", Reason="", readiness=true. Elapsed: 8.073367988s Apr 6 21:26:02.592: INFO: Pod "pod-subpath-test-configmap-w4x5": Phase="Running", Reason="", readiness=true. Elapsed: 10.078415348s Apr 6 21:26:04.596: INFO: Pod "pod-subpath-test-configmap-w4x5": Phase="Running", Reason="", readiness=true. Elapsed: 12.082759512s Apr 6 21:26:06.605: INFO: Pod "pod-subpath-test-configmap-w4x5": Phase="Running", Reason="", readiness=true. Elapsed: 14.091418739s Apr 6 21:26:08.609: INFO: Pod "pod-subpath-test-configmap-w4x5": Phase="Running", Reason="", readiness=true. Elapsed: 16.095836723s Apr 6 21:26:10.612: INFO: Pod "pod-subpath-test-configmap-w4x5": Phase="Running", Reason="", readiness=true. Elapsed: 18.098870983s Apr 6 21:26:12.616: INFO: Pod "pod-subpath-test-configmap-w4x5": Phase="Running", Reason="", readiness=true. Elapsed: 20.10275521s Apr 6 21:26:14.620: INFO: Pod "pod-subpath-test-configmap-w4x5": Phase="Running", Reason="", readiness=true. Elapsed: 22.107068663s Apr 6 21:26:16.624: INFO: Pod "pod-subpath-test-configmap-w4x5": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 24.111259947s STEP: Saw pod success Apr 6 21:26:16.625: INFO: Pod "pod-subpath-test-configmap-w4x5" satisfied condition "success or failure" Apr 6 21:26:16.627: INFO: Trying to get logs from node jerma-worker2 pod pod-subpath-test-configmap-w4x5 container test-container-subpath-configmap-w4x5: STEP: delete the pod Apr 6 21:26:16.649: INFO: Waiting for pod pod-subpath-test-configmap-w4x5 to disappear Apr 6 21:26:16.654: INFO: Pod pod-subpath-test-configmap-w4x5 no longer exists STEP: Deleting pod pod-subpath-test-configmap-w4x5 Apr 6 21:26:16.654: INFO: Deleting pod "pod-subpath-test-configmap-w4x5" in namespace "subpath-7094" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 6 21:26:16.656: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-7094" for this suite. • [SLOW TEST:24.248 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with configmap pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [LinuxOnly] [Conformance]","total":278,"completed":70,"skipped":1109,"failed":0} S ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD with validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 6 21:26:16.663: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD with validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Apr 6 21:26:16.705: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with known and required properties Apr 6 21:26:19.670: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-4244 create -f -' Apr 6 21:26:22.598: INFO: stderr: "" Apr 6 21:26:22.598: INFO: stdout: "e2e-test-crd-publish-openapi-295-crd.crd-publish-openapi-test-foo.example.com/test-foo created\n" Apr 6 21:26:22.599: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-4244 delete e2e-test-crd-publish-openapi-295-crds test-foo' Apr 6 21:26:22.725: INFO: stderr: "" Apr 6 21:26:22.725: INFO: stdout: "e2e-test-crd-publish-openapi-295-crd.crd-publish-openapi-test-foo.example.com \"test-foo\" deleted\n" Apr 6 21:26:22.725: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-4244 apply -f -' Apr 6 21:26:22.962: INFO: stderr: "" Apr 6 21:26:22.963: INFO: stdout: "e2e-test-crd-publish-openapi-295-crd.crd-publish-openapi-test-foo.example.com/test-foo created\n" Apr 6 
21:26:22.963: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-4244 delete e2e-test-crd-publish-openapi-295-crds test-foo' Apr 6 21:26:23.075: INFO: stderr: "" Apr 6 21:26:23.075: INFO: stdout: "e2e-test-crd-publish-openapi-295-crd.crd-publish-openapi-test-foo.example.com \"test-foo\" deleted\n" STEP: client-side validation (kubectl create and apply) rejects request with unknown properties when disallowed by the schema Apr 6 21:26:23.075: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-4244 create -f -' Apr 6 21:26:23.305: INFO: rc: 1 Apr 6 21:26:23.305: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-4244 apply -f -' Apr 6 21:26:23.548: INFO: rc: 1 STEP: client-side validation (kubectl create and apply) rejects request without required properties Apr 6 21:26:23.548: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-4244 create -f -' Apr 6 21:26:23.771: INFO: rc: 1 Apr 6 21:26:23.771: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-4244 apply -f -' Apr 6 21:26:23.990: INFO: rc: 1 STEP: kubectl explain works to explain CR properties Apr 6 21:26:23.990: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-295-crds' Apr 6 21:26:24.238: INFO: stderr: "" Apr 6 21:26:24.238: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-295-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nDESCRIPTION:\n Foo CRD for Testing\n\nFIELDS:\n apiVersion\t\n APIVersion defines the versioned schema of this representation of an\n object. Servers should convert recognized schemas to the latest internal\n value, and may reject unrecognized values. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n\n kind\t\n Kind is a string value representing the REST resource this object\n represents. Servers may infer this from the endpoint the client submits\n requests to. Cannot be updated. In CamelCase. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n\n metadata\t\n Standard object's metadata. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n spec\t\n Specification of Foo\n\n status\t\n Status of Foo\n\n" STEP: kubectl explain works to explain CR properties recursively Apr 6 21:26:24.238: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-295-crds.metadata' Apr 6 21:26:24.495: INFO: stderr: "" Apr 6 21:26:24.495: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-295-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: metadata \n\nDESCRIPTION:\n Standard object's metadata. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n ObjectMeta is metadata that all persisted resources must have, which\n includes all objects users must create.\n\nFIELDS:\n annotations\t\n Annotations is an unstructured key value map stored with a resource that\n may be set by external tools to store and retrieve arbitrary metadata. They\n are not queryable and should be preserved when modifying objects. 
More\n info: http://kubernetes.io/docs/user-guide/annotations\n\n clusterName\t\n The name of the cluster which the object belongs to. This is used to\n distinguish resources with same name and namespace in different clusters.\n This field is not set anywhere right now and apiserver is going to ignore\n it if set in create or update request.\n\n creationTimestamp\t\n CreationTimestamp is a timestamp representing the server time when this\n object was created. It is not guaranteed to be set in happens-before order\n across separate operations. Clients may not set this value. It is\n represented in RFC3339 form and is in UTC. Populated by the system.\n Read-only. Null for lists. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n deletionGracePeriodSeconds\t\n Number of seconds allowed for this object to gracefully terminate before it\n will be removed from the system. Only set when deletionTimestamp is also\n set. May only be shortened. Read-only.\n\n deletionTimestamp\t\n DeletionTimestamp is RFC 3339 date and time at which this resource will be\n deleted. This field is set by the server when a graceful deletion is\n requested by the user, and is not directly settable by a client. The\n resource is expected to be deleted (no longer visible from resource lists,\n and not reachable by name) after the time in this field, once the\n finalizers list is empty. As long as the finalizers list contains items,\n deletion is blocked. Once the deletionTimestamp is set, this value may not\n be unset or be set further into the future, although it may be shortened or\n the resource may be deleted prior to this time. For example, a user may\n request that a pod is deleted in 30 seconds. The Kubelet will react by\n sending a graceful termination signal to the containers in the pod. After\n that 30 seconds, the Kubelet will send a hard termination signal (SIGKILL)\n to the container and after cleanup, remove the pod from the API. In the\n presence of network partitions, this object may still exist after this\n timestamp, until an administrator or automated process can determine the\n resource is fully terminated. If not set, graceful deletion of the object\n has not been requested. Populated by the system when a graceful deletion is\n requested. Read-only. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n finalizers\t<[]string>\n Must be empty before the object is deleted from the registry. Each entry is\n an identifier for the responsible component that will remove the entry from\n the list. If the deletionTimestamp of the object is non-nil, entries in\n this list can only be removed. Finalizers may be processed and removed in\n any order. Order is NOT enforced because it introduces significant risk of\n stuck finalizers. finalizers is a shared field, any actor with permission\n can reorder it. If the finalizer list is processed in order, then this can\n lead to a situation in which the component responsible for the first\n finalizer in the list is waiting for a signal (field value, external\n system, or other) produced by a component responsible for a finalizer later\n in the list, resulting in a deadlock. Without enforced ordering finalizers\n are free to order amongst themselves and are not vulnerable to ordering\n changes in the list.\n\n generateName\t\n GenerateName is an optional prefix, used by the server, to generate a\n unique name ONLY IF the Name field has not been provided. 
If this field is\n used, the name returned to the client will be different than the name\n passed. This value will also be combined with a unique suffix. The provided\n value has the same validation rules as the Name field, and may be truncated\n by the length of the suffix required to make the value unique on the\n server. If this field is specified and the generated name exists, the\n server will NOT return a 409 - instead, it will either return 201 Created\n or 500 with Reason ServerTimeout indicating a unique name could not be\n found in the time allotted, and the client should retry (optionally after\n the time indicated in the Retry-After header). Applied only if Name is not\n specified. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#idempotency\n\n generation\t\n A sequence number representing a specific generation of the desired state.\n Populated by the system. Read-only.\n\n labels\t\n Map of string keys and values that can be used to organize and categorize\n (scope and select) objects. May match selectors of replication controllers\n and services. More info: http://kubernetes.io/docs/user-guide/labels\n\n managedFields\t<[]Object>\n ManagedFields maps workflow-id and version to the set of fields that are\n managed by that workflow. This is mostly for internal housekeeping, and\n users typically shouldn't need to set or understand this field. A workflow\n can be the user's name, a controller's name, or the name of a specific\n apply path like \"ci-cd\". The set of fields is always in the version that\n the workflow used when modifying the object.\n\n name\t\n Name must be unique within a namespace. Is required when creating\n resources, although some resources may allow a client to request the\n generation of an appropriate name automatically. Name is primarily intended\n for creation idempotence and configuration definition. Cannot be updated.\n More info: http://kubernetes.io/docs/user-guide/identifiers#names\n\n namespace\t\n Namespace defines the space within each name must be unique. An empty\n namespace is equivalent to the \"default\" namespace, but \"default\" is the\n canonical representation. Not all objects are required to be scoped to a\n namespace - the value of this field for those objects will be empty. Must\n be a DNS_LABEL. Cannot be updated. More info:\n http://kubernetes.io/docs/user-guide/namespaces\n\n ownerReferences\t<[]Object>\n List of objects depended by this object. If ALL objects in the list have\n been deleted, this object will be garbage collected. If this object is\n managed by a controller, then an entry in this list will point to this\n controller, with the controller field set to true. There cannot be more\n than one managing controller.\n\n resourceVersion\t\n An opaque value that represents the internal version of this object that\n can be used by clients to determine when objects have changed. May be used\n for optimistic concurrency, change detection, and the watch operation on a\n resource or set of resources. Clients must treat these values as opaque and\n passed unmodified back to the server. They may only be valid for a\n particular resource or set of resources. Populated by the system.\n Read-only. Value must be treated as opaque by clients and . More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#concurrency-control-and-consistency\n\n selfLink\t\n SelfLink is a URL representing this object. Populated by the system.\n Read-only. 
DEPRECATED Kubernetes will stop propagating this field in 1.20\n release and the field is planned to be removed in 1.21 release.\n\n uid\t<string>\n UID is the unique in time and space value for this object. It is typically\n generated by the server on successful creation of a resource and is not\n allowed to change on PUT operations. Populated by the system. Read-only.\n More info: http://kubernetes.io/docs/user-guide/identifiers#uids\n\n" Apr 6 21:26:24.495: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-295-crds.spec' Apr 6 21:26:24.734: INFO: stderr: "" Apr 6 21:26:24.734: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-295-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: spec <Object>\n\nDESCRIPTION:\n Specification of Foo\n\nFIELDS:\n bars\t<[]Object>\n List of Bars and their specs.\n\n" Apr 6 21:26:24.734: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-295-crds.spec.bars' Apr 6 21:26:24.980: INFO: stderr: "" Apr 6 21:26:24.980: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-295-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: bars <[]Object>\n\nDESCRIPTION:\n List of Bars and their specs.\n\nFIELDS:\n age\t<string>\n Age of Bar.\n\n bazs\t<[]string>\n List of Bazs.\n\n name\t<string> -required-\n Name of Bar.\n\n" STEP: kubectl explain works to return error when explain is called on property that doesn't exist Apr 6 21:26:24.981: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-295-crds.spec.bars2' Apr 6 21:26:25.202: INFO: rc: 1 [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 6 21:26:28.116: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-4244" for this suite.
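The explain calls above succeed because the CRD under test publishes a structural OpenAPI v3 schema, which the apiserver merges into its OpenAPI document; kubectl explain then resolves dotted field paths against that schema and exits non-zero (the rc: 1 above) for a property the schema does not define. A minimal sketch of the same checks against a hand-made CRD, where the resource name foos is hypothetical rather than the generated e2e-test-crd-publish-openapi-295-crds one:

    # Walk the published schema, field by field
    kubectl explain foos.spec
    kubectl explain foos.spec.bars
    # A property absent from the schema fails with a non-zero exit code
    kubectl explain foos.spec.bars2 || echo "no such field, as the test expects"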
• [SLOW TEST:11.459 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD with validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD with validation schema [Conformance]","total":278,"completed":71,"skipped":1110,"failed":0} SSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should include webhook resources in discovery documents [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 6 21:26:28.123: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Apr 6 21:26:28.827: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Apr 6 21:26:30.838: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721805188, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721805188, loc:(*time.Location)(0x78ee080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721805188, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721805188, loc:(*time.Location)(0x78ee080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Apr 6 21:26:33.875: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should include webhook resources in discovery documents [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: fetching the /apis discovery document STEP: finding the admissionregistration.k8s.io API group in the /apis discovery document STEP: finding the admissionregistration.k8s.io/v1 API group/version in the /apis discovery document STEP: fetching the /apis/admissionregistration.k8s.io discovery document STEP: finding the admissionregistration.k8s.io/v1 API group/version in the /apis/admissionregistration.k8s.io discovery document STEP: fetching the /apis/admissionregistration.k8s.io/v1 discovery document 
STEP: finding mutatingwebhookconfigurations and validatingwebhookconfigurations resources in the /apis/admissionregistration.k8s.io/v1 discovery document [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 6 21:26:33.883: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-5441" for this suite. STEP: Destroying namespace "webhook-5441-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:5.861 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should include webhook resources in discovery documents [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should include webhook resources in discovery documents [Conformance]","total":278,"completed":72,"skipped":1117,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 6 21:26:33.984: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating projection with secret that has name projected-secret-test-map-8ccf821b-01b3-456e-9566-bd08967f6943 STEP: Creating a pod to test consume secrets Apr 6 21:26:34.037: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-ca14c025-a1c5-4bbc-94fd-a3428230a11f" in namespace "projected-5377" to be "success or failure" Apr 6 21:26:34.055: INFO: Pod "pod-projected-secrets-ca14c025-a1c5-4bbc-94fd-a3428230a11f": Phase="Pending", Reason="", readiness=false. Elapsed: 17.796668ms Apr 6 21:26:36.114: INFO: Pod "pod-projected-secrets-ca14c025-a1c5-4bbc-94fd-a3428230a11f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.077518107s Apr 6 21:26:38.118: INFO: Pod "pod-projected-secrets-ca14c025-a1c5-4bbc-94fd-a3428230a11f": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.080895348s STEP: Saw pod success Apr 6 21:26:38.118: INFO: Pod "pod-projected-secrets-ca14c025-a1c5-4bbc-94fd-a3428230a11f" satisfied condition "success or failure" Apr 6 21:26:38.120: INFO: Trying to get logs from node jerma-worker pod pod-projected-secrets-ca14c025-a1c5-4bbc-94fd-a3428230a11f container projected-secret-volume-test: STEP: delete the pod Apr 6 21:26:38.154: INFO: Waiting for pod pod-projected-secrets-ca14c025-a1c5-4bbc-94fd-a3428230a11f to disappear Apr 6 21:26:38.167: INFO: Pod pod-projected-secrets-ca14c025-a1c5-4bbc-94fd-a3428230a11f no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 6 21:26:38.167: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5377" for this suite. •{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":278,"completed":73,"skipped":1130,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 6 21:26:38.193: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating pod test-webserver-b56f7b49-7633-409b-ac9a-71291e42205e in namespace container-probe-6905 Apr 6 21:26:42.274: INFO: Started pod test-webserver-b56f7b49-7633-409b-ac9a-71291e42205e in namespace container-probe-6905 STEP: checking the pod's current state and verifying that restartCount is present Apr 6 21:26:42.277: INFO: Initial restart count of pod test-webserver-b56f7b49-7633-409b-ac9a-71291e42205e is 0 STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 6 21:30:42.951: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-6905" for this suite. 
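The probe test above is a negative check: it starts a web server pod with an HTTP liveness probe, records the initial restartCount of 0, and then waits roughly four minutes before deleting the pod, asserting the count never moved. A minimal sketch of a pod that should behave the same way, assuming any image that keeps answering 200 on / (nginx is used here purely as a stand-in for the test's web server image):

    kubectl create -f - <<EOF
    apiVersion: v1
    kind: Pod
    metadata:
      name: liveness-webserver
    spec:
      containers:
      - name: webserver
        image: nginx                 # stand-in; any always-healthy HTTP server works
        livenessProbe:
          httpGet:
            path: /
            port: 80
          initialDelaySeconds: 15
          failureThreshold: 3
    EOF
    # restartCount should stay at 0 for as long as the probe keeps passing
    kubectl get pod liveness-webserver -o jsonpath='{.status.containerStatuses[0].restartCount}'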
• [SLOW TEST:244.794 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":278,"completed":74,"skipped":1159,"failed":0} SSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 6 21:30:42.988: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Apr 6 21:30:43.700: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Apr 6 21:30:45.716: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721805443, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721805443, loc:(*time.Location)(0x78ee080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721805443, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721805443, loc:(*time.Location)(0x78ee080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Apr 6 21:30:48.755: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should be able to deny attaching pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Registering the webhook via the AdmissionRegistration API STEP: create a pod STEP: 'kubectl attach' the pod, should be denied by the webhook Apr 6 21:30:52.823: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config attach --namespace=webhook-3750 to-be-attached-pod -i -c=container1' Apr 6 21:30:52.942: INFO: rc: 1 [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 6 21:30:52.948: INFO: Waiting up to 3m0s for 
all (but 0) nodes to be ready STEP: Destroying namespace "webhook-3750" for this suite. STEP: Destroying namespace "webhook-3750-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:10.062 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to deny attaching pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","total":278,"completed":75,"skipped":1166,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 6 21:30:53.051: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [BeforeEach] when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81 [It] should be possible to delete [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 6 21:30:53.188: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-956" for this suite. 
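Of the two tests completing above, the webhook one is the instructive case: kubectl attach is served as a CONNECT on the pods/attach subresource, so a validating webhook registered for that operation can reject it, which is why the attach command exited with rc: 1. A sketch of such a registration, with the configuration name, target service, path, and CA bundle all illustrative placeholders rather than what the framework actually generated:

    kubectl create -f - <<EOF
    apiVersion: admissionregistration.k8s.io/v1
    kind: ValidatingWebhookConfiguration
    metadata:
      name: deny-attaching-pod.example.com
    webhooks:
    - name: deny-attaching-pod.example.com
      rules:
      - apiGroups: [""]
        apiVersions: ["v1"]
        operations: ["CONNECT"]        # kubectl attach arrives as a CONNECT
        resources: ["pods/attach"]
      clientConfig:
        service:
          namespace: default           # assumed; the e2e run uses its own namespace
          name: e2e-test-webhook
          path: /pods/attach
        caBundle: Cg==                 # placeholder; use the webhook server's CA
      admissionReviewVersions: ["v1"]
      sideEffects: None
      failurePolicy: Fail
    EOF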
•{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance]","total":278,"completed":76,"skipped":1191,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 6 21:30:53.199: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating projection with secret that has name projected-secret-test-db553df5-4302-46c9-9275-93df679f3cf6 STEP: Creating a pod to test consume secrets Apr 6 21:30:53.295: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-07058438-e516-4702-8cf4-6d58eaca8055" in namespace "projected-6397" to be "success or failure" Apr 6 21:30:53.323: INFO: Pod "pod-projected-secrets-07058438-e516-4702-8cf4-6d58eaca8055": Phase="Pending", Reason="", readiness=false. Elapsed: 27.689444ms Apr 6 21:30:55.327: INFO: Pod "pod-projected-secrets-07058438-e516-4702-8cf4-6d58eaca8055": Phase="Pending", Reason="", readiness=false. Elapsed: 2.031962291s Apr 6 21:30:57.332: INFO: Pod "pod-projected-secrets-07058438-e516-4702-8cf4-6d58eaca8055": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.036542015s STEP: Saw pod success Apr 6 21:30:57.332: INFO: Pod "pod-projected-secrets-07058438-e516-4702-8cf4-6d58eaca8055" satisfied condition "success or failure" Apr 6 21:30:57.341: INFO: Trying to get logs from node jerma-worker pod pod-projected-secrets-07058438-e516-4702-8cf4-6d58eaca8055 container projected-secret-volume-test: STEP: delete the pod Apr 6 21:30:57.400: INFO: Waiting for pod pod-projected-secrets-07058438-e516-4702-8cf4-6d58eaca8055 to disappear Apr 6 21:30:57.412: INFO: Pod pod-projected-secrets-07058438-e516-4702-8cf4-6d58eaca8055 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 6 21:30:57.413: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-6397" for this suite. 
•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":77,"skipped":1292,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Proxy server should support --unix-socket=/path [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 6 21:30:57.419: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [It] should support --unix-socket=/path [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Starting the proxy Apr 6 21:30:57.486: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy --unix-socket=/tmp/kubectl-proxy-unix018068699/test' STEP: retrieving proxy /api/ output [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 6 21:30:57.563: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-8252" for this suite. •{"msg":"PASSED [sig-cli] Kubectl client Proxy server should support --unix-socket=/path [Conformance]","total":278,"completed":78,"skipped":1311,"failed":0} SS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 6 21:30:57.572: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for multiple CRDs of different groups [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: CRs in different groups (two CRDs) show up in OpenAPI documentation Apr 6 21:30:57.659: INFO: >>> kubeConfig: /root/.kube/config Apr 6 21:31:00.608: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 6 21:31:11.178: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-8953" for this suite. 
• [SLOW TEST:13.612 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for multiple CRDs of different groups [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance]","total":278,"completed":79,"skipped":1313,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should be able to create a functioning NodePort service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 6 21:31:11.184: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139 [It] should be able to create a functioning NodePort service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating service nodeport-test with type=NodePort in namespace services-4895 STEP: creating replication controller nodeport-test in namespace services-4895 I0406 21:31:11.326726 6 runners.go:189] Created replication controller with name: nodeport-test, namespace: services-4895, replica count: 2 I0406 21:31:14.378106 6 runners.go:189] nodeport-test Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0406 21:31:17.378381 6 runners.go:189] nodeport-test Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Apr 6 21:31:17.378: INFO: Creating new exec pod Apr 6 21:31:22.396: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-4895 execpoddlkn9 -- /bin/sh -x -c nc -zv -t -w 2 nodeport-test 80' Apr 6 21:31:22.637: INFO: stderr: "I0406 21:31:22.540045 1562 log.go:172] (0xc0009466e0) (0xc00091c000) Create stream\nI0406 21:31:22.540107 1562 log.go:172] (0xc0009466e0) (0xc00091c000) Stream added, broadcasting: 1\nI0406 21:31:22.543091 1562 log.go:172] (0xc0009466e0) Reply frame received for 1\nI0406 21:31:22.543150 1562 log.go:172] (0xc0009466e0) (0xc00091c0a0) Create stream\nI0406 21:31:22.543184 1562 log.go:172] (0xc0009466e0) (0xc00091c0a0) Stream added, broadcasting: 3\nI0406 21:31:22.544387 1562 log.go:172] (0xc0009466e0) Reply frame received for 3\nI0406 21:31:22.544451 1562 log.go:172] (0xc0009466e0) (0xc000a18000) Create stream\nI0406 21:31:22.544486 1562 log.go:172] (0xc0009466e0) (0xc000a18000) Stream added, broadcasting: 5\nI0406 21:31:22.545780 1562 log.go:172] (0xc0009466e0) Reply frame received for 5\nI0406 21:31:22.625870 1562 log.go:172] (0xc0009466e0) Data frame received for 5\nI0406 21:31:22.625900 1562 log.go:172] (0xc000a18000) (5) Data frame handling\nI0406 21:31:22.625915 1562 log.go:172] (0xc000a18000) (5) Data frame sent\n+ nc -zv -t -w 2 nodeport-test 80\nI0406 21:31:22.626136 1562 log.go:172] (0xc0009466e0) 
Data frame received for 5\nI0406 21:31:22.626147 1562 log.go:172] (0xc000a18000) (5) Data frame handling\nI0406 21:31:22.626160 1562 log.go:172] (0xc000a18000) (5) Data frame sent\nConnection to nodeport-test 80 port [tcp/http] succeeded!\nI0406 21:31:22.626473 1562 log.go:172] (0xc0009466e0) Data frame received for 3\nI0406 21:31:22.626489 1562 log.go:172] (0xc00091c0a0) (3) Data frame handling\nI0406 21:31:22.626553 1562 log.go:172] (0xc0009466e0) Data frame received for 5\nI0406 21:31:22.626569 1562 log.go:172] (0xc000a18000) (5) Data frame handling\nI0406 21:31:22.629031 1562 log.go:172] (0xc0009466e0) Data frame received for 1\nI0406 21:31:22.629065 1562 log.go:172] (0xc00091c000) (1) Data frame handling\nI0406 21:31:22.629088 1562 log.go:172] (0xc00091c000) (1) Data frame sent\nI0406 21:31:22.630824 1562 log.go:172] (0xc0009466e0) (0xc00091c000) Stream removed, broadcasting: 1\nI0406 21:31:22.631330 1562 log.go:172] (0xc0009466e0) (0xc00091c000) Stream removed, broadcasting: 1\nI0406 21:31:22.631365 1562 log.go:172] (0xc0009466e0) (0xc00091c0a0) Stream removed, broadcasting: 3\nI0406 21:31:22.631384 1562 log.go:172] (0xc0009466e0) (0xc000a18000) Stream removed, broadcasting: 5\n" Apr 6 21:31:22.637: INFO: stdout: "" Apr 6 21:31:22.638: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-4895 execpoddlkn9 -- /bin/sh -x -c nc -zv -t -w 2 10.109.44.142 80' Apr 6 21:31:22.846: INFO: stderr: "I0406 21:31:22.758923 1582 log.go:172] (0xc0000f58c0) (0xc0009e83c0) Create stream\nI0406 21:31:22.758991 1582 log.go:172] (0xc0000f58c0) (0xc0009e83c0) Stream added, broadcasting: 1\nI0406 21:31:22.763654 1582 log.go:172] (0xc0000f58c0) Reply frame received for 1\nI0406 21:31:22.763693 1582 log.go:172] (0xc0000f58c0) (0xc0004e65a0) Create stream\nI0406 21:31:22.763702 1582 log.go:172] (0xc0000f58c0) (0xc0004e65a0) Stream added, broadcasting: 3\nI0406 21:31:22.764573 1582 log.go:172] (0xc0000f58c0) Reply frame received for 3\nI0406 21:31:22.764606 1582 log.go:172] (0xc0000f58c0) (0xc000542b40) Create stream\nI0406 21:31:22.764616 1582 log.go:172] (0xc0000f58c0) (0xc000542b40) Stream added, broadcasting: 5\nI0406 21:31:22.765592 1582 log.go:172] (0xc0000f58c0) Reply frame received for 5\nI0406 21:31:22.840416 1582 log.go:172] (0xc0000f58c0) Data frame received for 5\nI0406 21:31:22.840460 1582 log.go:172] (0xc000542b40) (5) Data frame handling\nI0406 21:31:22.840486 1582 log.go:172] (0xc000542b40) (5) Data frame sent\nI0406 21:31:22.840496 1582 log.go:172] (0xc0000f58c0) Data frame received for 5\nI0406 21:31:22.840506 1582 log.go:172] (0xc000542b40) (5) Data frame handling\n+ nc -zv -t -w 2 10.109.44.142 80\nConnection to 10.109.44.142 80 port [tcp/http] succeeded!\nI0406 21:31:22.840636 1582 log.go:172] (0xc0000f58c0) Data frame received for 3\nI0406 21:31:22.840682 1582 log.go:172] (0xc0004e65a0) (3) Data frame handling\nI0406 21:31:22.842296 1582 log.go:172] (0xc0000f58c0) Data frame received for 1\nI0406 21:31:22.842334 1582 log.go:172] (0xc0009e83c0) (1) Data frame handling\nI0406 21:31:22.842354 1582 log.go:172] (0xc0009e83c0) (1) Data frame sent\nI0406 21:31:22.842377 1582 log.go:172] (0xc0000f58c0) (0xc0009e83c0) Stream removed, broadcasting: 1\nI0406 21:31:22.842426 1582 log.go:172] (0xc0000f58c0) Go away received\nI0406 21:31:22.842805 1582 log.go:172] (0xc0000f58c0) (0xc0009e83c0) Stream removed, broadcasting: 1\nI0406 21:31:22.842825 1582 log.go:172] (0xc0000f58c0) (0xc0004e65a0) Stream removed, broadcasting: 3\nI0406 21:31:22.842834 1582 
log.go:172] (0xc0000f58c0) (0xc000542b40) Stream removed, broadcasting: 5\n" Apr 6 21:31:22.847: INFO: stdout: "" Apr 6 21:31:22.847: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-4895 execpoddlkn9 -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.10 32413' Apr 6 21:31:23.052: INFO: stderr: "I0406 21:31:22.975075 1603 log.go:172] (0xc0000f4a50) (0xc0009560a0) Create stream\nI0406 21:31:22.975161 1603 log.go:172] (0xc0000f4a50) (0xc0009560a0) Stream added, broadcasting: 1\nI0406 21:31:22.983337 1603 log.go:172] (0xc0000f4a50) Reply frame received for 1\nI0406 21:31:22.983392 1603 log.go:172] (0xc0000f4a50) (0xc000956140) Create stream\nI0406 21:31:22.983405 1603 log.go:172] (0xc0000f4a50) (0xc000956140) Stream added, broadcasting: 3\nI0406 21:31:22.984992 1603 log.go:172] (0xc0000f4a50) Reply frame received for 3\nI0406 21:31:22.985041 1603 log.go:172] (0xc0000f4a50) (0xc00060e6e0) Create stream\nI0406 21:31:22.985061 1603 log.go:172] (0xc0000f4a50) (0xc00060e6e0) Stream added, broadcasting: 5\nI0406 21:31:22.987189 1603 log.go:172] (0xc0000f4a50) Reply frame received for 5\nI0406 21:31:23.045102 1603 log.go:172] (0xc0000f4a50) Data frame received for 5\nI0406 21:31:23.045393 1603 log.go:172] (0xc00060e6e0) (5) Data frame handling\nI0406 21:31:23.045419 1603 log.go:172] (0xc00060e6e0) (5) Data frame sent\nI0406 21:31:23.045466 1603 log.go:172] (0xc0000f4a50) Data frame received for 5\nI0406 21:31:23.045485 1603 log.go:172] (0xc00060e6e0) (5) Data frame handling\n+ nc -zv -t -w 2 172.17.0.10 32413\nConnection to 172.17.0.10 32413 port [tcp/32413] succeeded!\nI0406 21:31:23.045508 1603 log.go:172] (0xc0000f4a50) Data frame received for 3\nI0406 21:31:23.045582 1603 log.go:172] (0xc000956140) (3) Data frame handling\nI0406 21:31:23.045622 1603 log.go:172] (0xc00060e6e0) (5) Data frame sent\nI0406 21:31:23.045754 1603 log.go:172] (0xc0000f4a50) Data frame received for 5\nI0406 21:31:23.045781 1603 log.go:172] (0xc00060e6e0) (5) Data frame handling\nI0406 21:31:23.047530 1603 log.go:172] (0xc0000f4a50) Data frame received for 1\nI0406 21:31:23.047555 1603 log.go:172] (0xc0009560a0) (1) Data frame handling\nI0406 21:31:23.047568 1603 log.go:172] (0xc0009560a0) (1) Data frame sent\nI0406 21:31:23.047587 1603 log.go:172] (0xc0000f4a50) (0xc0009560a0) Stream removed, broadcasting: 1\nI0406 21:31:23.047613 1603 log.go:172] (0xc0000f4a50) Go away received\nI0406 21:31:23.048090 1603 log.go:172] (0xc0000f4a50) (0xc0009560a0) Stream removed, broadcasting: 1\nI0406 21:31:23.048133 1603 log.go:172] (0xc0000f4a50) (0xc000956140) Stream removed, broadcasting: 3\nI0406 21:31:23.048147 1603 log.go:172] (0xc0000f4a50) (0xc00060e6e0) Stream removed, broadcasting: 5\n" Apr 6 21:31:23.052: INFO: stdout: "" Apr 6 21:31:23.052: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-4895 execpoddlkn9 -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.8 32413' Apr 6 21:31:23.259: INFO: stderr: "I0406 21:31:23.199171 1624 log.go:172] (0xc000208d10) (0xc0005808c0) Create stream\nI0406 21:31:23.199233 1624 log.go:172] (0xc000208d10) (0xc0005808c0) Stream added, broadcasting: 1\nI0406 21:31:23.201300 1624 log.go:172] (0xc000208d10) Reply frame received for 1\nI0406 21:31:23.201333 1624 log.go:172] (0xc000208d10) (0xc0007af5e0) Create stream\nI0406 21:31:23.201342 1624 log.go:172] (0xc000208d10) (0xc0007af5e0) Stream added, broadcasting: 3\nI0406 21:31:23.202268 1624 log.go:172] (0xc000208d10) Reply frame received for 3\nI0406 21:31:23.202302 1624 
log.go:172] (0xc000208d10) (0xc000a96000) Create stream\nI0406 21:31:23.202312 1624 log.go:172] (0xc000208d10) (0xc000a96000) Stream added, broadcasting: 5\nI0406 21:31:23.203058 1624 log.go:172] (0xc000208d10) Reply frame received for 5\nI0406 21:31:23.252726 1624 log.go:172] (0xc000208d10) Data frame received for 3\nI0406 21:31:23.252781 1624 log.go:172] (0xc0007af5e0) (3) Data frame handling\nI0406 21:31:23.252812 1624 log.go:172] (0xc000208d10) Data frame received for 5\nI0406 21:31:23.252824 1624 log.go:172] (0xc000a96000) (5) Data frame handling\nI0406 21:31:23.252849 1624 log.go:172] (0xc000a96000) (5) Data frame sent\nI0406 21:31:23.252867 1624 log.go:172] (0xc000208d10) Data frame received for 5\nI0406 21:31:23.252883 1624 log.go:172] (0xc000a96000) (5) Data frame handling\n+ nc -zv -t -w 2 172.17.0.8 32413\nConnection to 172.17.0.8 32413 port [tcp/32413] succeeded!\nI0406 21:31:23.254794 1624 log.go:172] (0xc000208d10) Data frame received for 1\nI0406 21:31:23.254822 1624 log.go:172] (0xc0005808c0) (1) Data frame handling\nI0406 21:31:23.254836 1624 log.go:172] (0xc0005808c0) (1) Data frame sent\nI0406 21:31:23.254849 1624 log.go:172] (0xc000208d10) (0xc0005808c0) Stream removed, broadcasting: 1\nI0406 21:31:23.254865 1624 log.go:172] (0xc000208d10) Go away received\nI0406 21:31:23.255187 1624 log.go:172] (0xc000208d10) (0xc0005808c0) Stream removed, broadcasting: 1\nI0406 21:31:23.255200 1624 log.go:172] (0xc000208d10) (0xc0007af5e0) Stream removed, broadcasting: 3\nI0406 21:31:23.255206 1624 log.go:172] (0xc000208d10) (0xc000a96000) Stream removed, broadcasting: 5\n" Apr 6 21:31:23.259: INFO: stdout: "" [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 6 21:31:23.259: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-4895" for this suite. 
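The four exec calls above are the whole substance of the NodePort check: the service must answer on its DNS name and ClusterIP at port 80, and on each node's IP at the allocated node port. Rerunning them by hand looks like this (the pod, namespace, IPs, and port 32413 are the values from this particular run and will differ elsewhere):

    kubectl --kubeconfig=/root/.kube/config exec --namespace=services-4895 execpoddlkn9 -- /bin/sh -c 'nc -zv -t -w 2 nodeport-test 80'
    kubectl --kubeconfig=/root/.kube/config exec --namespace=services-4895 execpoddlkn9 -- /bin/sh -c 'nc -zv -t -w 2 10.109.44.142 80'
    kubectl --kubeconfig=/root/.kube/config exec --namespace=services-4895 execpoddlkn9 -- /bin/sh -c 'nc -zv -t -w 2 172.17.0.10 32413'
    kubectl --kubeconfig=/root/.kube/config exec --namespace=services-4895 execpoddlkn9 -- /bin/sh -c 'nc -zv -t -w 2 172.17.0.8 32413'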
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143 • [SLOW TEST:12.101 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to create a functioning NodePort service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to create a functioning NodePort service [Conformance]","total":278,"completed":80,"skipped":1348,"failed":0} SSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group but different versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 6 21:31:23.285: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for multiple CRDs of same group but different versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: CRs in the same group but different versions (one multiversion CRD) show up in OpenAPI documentation Apr 6 21:31:23.322: INFO: >>> kubeConfig: /root/.kube/config STEP: CRs in the same group but different versions (two CRDs) show up in OpenAPI documentation Apr 6 21:31:34.751: INFO: >>> kubeConfig: /root/.kube/config Apr 6 21:31:36.679: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 6 21:31:47.202: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-8592" for this suite. 
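The multi-version case above hinges on one CRD serving two versions at once, with exactly one marked as the storage version; the apiserver publishes OpenAPI schemas for every served version. A minimal two-version CRD sketch, with group and kind names invented for illustration:

    kubectl create -f - <<EOF
    apiVersion: apiextensions.k8s.io/v1
    kind: CustomResourceDefinition
    metadata:
      name: foos.stable.example.com
    spec:
      group: stable.example.com
      scope: Namespaced
      names:
        plural: foos
        singular: foo
        kind: Foo
      versions:
      - name: v1
        served: true
        storage: true                # exactly one version may be the storage version
        schema:
          openAPIV3Schema:
            type: object
      - name: v2
        served: true
        storage: false
        schema:
          openAPIV3Schema:
            type: object
    EOF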
• [SLOW TEST:23.922 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for multiple CRDs of same group but different versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group but different versions [Conformance]","total":278,"completed":81,"skipped":1354,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 6 21:31:47.208: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name configmap-test-volume-80757158-a02b-49e2-89e0-b98da88f543c STEP: Creating a pod to test consume configMaps Apr 6 21:31:47.321: INFO: Waiting up to 5m0s for pod "pod-configmaps-cf7cd26c-2432-4d94-a8c8-077708ee22ff" in namespace "configmap-9472" to be "success or failure" Apr 6 21:31:47.324: INFO: Pod "pod-configmaps-cf7cd26c-2432-4d94-a8c8-077708ee22ff": Phase="Pending", Reason="", readiness=false. Elapsed: 3.515765ms Apr 6 21:31:49.348: INFO: Pod "pod-configmaps-cf7cd26c-2432-4d94-a8c8-077708ee22ff": Phase="Pending", Reason="", readiness=false. Elapsed: 2.027520169s Apr 6 21:31:51.352: INFO: Pod "pod-configmaps-cf7cd26c-2432-4d94-a8c8-077708ee22ff": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.030906792s STEP: Saw pod success Apr 6 21:31:51.352: INFO: Pod "pod-configmaps-cf7cd26c-2432-4d94-a8c8-077708ee22ff" satisfied condition "success or failure" Apr 6 21:31:51.355: INFO: Trying to get logs from node jerma-worker2 pod pod-configmaps-cf7cd26c-2432-4d94-a8c8-077708ee22ff container configmap-volume-test: STEP: delete the pod Apr 6 21:31:51.385: INFO: Waiting for pod pod-configmaps-cf7cd26c-2432-4d94-a8c8-077708ee22ff to disappear Apr 6 21:31:51.390: INFO: Pod pod-configmaps-cf7cd26c-2432-4d94-a8c8-077708ee22ff no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 6 21:31:51.390: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-9472" for this suite. 
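The configMap volume test follows the same create-pod-and-check-its-output pattern as the secret ones: mount a configMap with a restrictive defaultMode and have the container print the resulting file mode and contents. A sketch with invented names and a 0400 mode:

    kubectl create configmap demo-config --from-literal=data-1=value-1
    kubectl create -f - <<EOF
    apiVersion: v1
    kind: Pod
    metadata:
      name: pod-configmap-mode
    spec:
      restartPolicy: Never
      containers:
      - name: configmap-volume-test
        image: busybox
        command: ["sh", "-c", "ls -l /etc/configmap-volume && cat /etc/configmap-volume/data-1"]
        volumeMounts:
        - name: configmap-volume
          mountPath: /etc/configmap-volume
      volumes:
      - name: configmap-volume
        configMap:
          name: demo-config
          defaultMode: 0400
    EOF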
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":82,"skipped":1370,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 6 21:31:51.399: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Apr 6 21:31:52.238: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Apr 6 21:31:54.247: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721805512, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721805512, loc:(*time.Location)(0x78ee080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721805512, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721805512, loc:(*time.Location)(0x78ee080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Apr 6 21:31:57.277: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate custom resource with different stored version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Apr 6 21:31:57.284: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the mutating webhook for custom resource e2e-test-webhook-9916-crds.webhook.example.com via the AdmissionRegistration API STEP: Creating a custom resource while v1 is storage version STEP: Patching Custom Resource Definition to set v2 as storage STEP: Patching the custom resource while v2 is storage version [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 6 21:31:58.520: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-216" for this suite. STEP: Destroying namespace "webhook-216-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:7.198 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate custom resource with different stored version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","total":278,"completed":83,"skipped":1382,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Security Context When creating a pod with readOnlyRootFilesystem should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 6 21:31:58.597: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:39 [It] should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Apr 6 21:31:58.657: INFO: Waiting up to 5m0s for pod "busybox-readonly-false-02cd72e0-3184-49a0-bccb-da4dc8485539" in namespace "security-context-test-4261" to be "success or failure" Apr 6 21:31:58.660: INFO: Pod "busybox-readonly-false-02cd72e0-3184-49a0-bccb-da4dc8485539": Phase="Pending", Reason="", readiness=false. Elapsed: 2.812724ms Apr 6 21:32:00.663: INFO: Pod "busybox-readonly-false-02cd72e0-3184-49a0-bccb-da4dc8485539": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006299814s Apr 6 21:32:02.667: INFO: Pod "busybox-readonly-false-02cd72e0-3184-49a0-bccb-da4dc8485539": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.01021845s Apr 6 21:32:02.667: INFO: Pod "busybox-readonly-false-02cd72e0-3184-49a0-bccb-da4dc8485539" satisfied condition "success or failure" [AfterEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 6 21:32:02.668: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-4261" for this suite. 
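The security-context test above checks the permissive side of the matrix: with readOnlyRootFilesystem: false the container can write anywhere on its root filesystem, so the pod runs to Succeeded. A sketch that makes the writable rootfs observable, with names and the probe path invented; flipping the flag to true makes the same touch fail:

    kubectl create -f - <<EOF
    apiVersion: v1
    kind: Pod
    metadata:
      name: busybox-readonly-false
    spec:
      restartPolicy: Never
      containers:
      - name: busybox
        image: busybox
        command: ["sh", "-c", "touch /tmp/probe && echo rootfs is writable"]
        securityContext:
          readOnlyRootFilesystem: false   # set true and the touch fails with a read-only error
    EOF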
•{"msg":"PASSED [k8s.io] Security Context When creating a pod with readOnlyRootFilesystem should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]","total":278,"completed":84,"skipped":1404,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 6 21:32:02.678: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Performing setup for networking test in namespace pod-network-test-4966 STEP: creating a selector STEP: Creating the service pods in kubernetes Apr 6 21:32:02.718: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods Apr 6 21:32:24.864: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.1.182:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-4966 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 6 21:32:24.864: INFO: >>> kubeConfig: /root/.kube/config I0406 21:32:24.899446 6 log.go:172] (0xc0029b9550) (0xc002531f40) Create stream I0406 21:32:24.899486 6 log.go:172] (0xc0029b9550) (0xc002531f40) Stream added, broadcasting: 1 I0406 21:32:24.901619 6 log.go:172] (0xc0029b9550) Reply frame received for 1 I0406 21:32:24.901658 6 log.go:172] (0xc0029b9550) (0xc001e90000) Create stream I0406 21:32:24.901670 6 log.go:172] (0xc0029b9550) (0xc001e90000) Stream added, broadcasting: 3 I0406 21:32:24.902558 6 log.go:172] (0xc0029b9550) Reply frame received for 3 I0406 21:32:24.902591 6 log.go:172] (0xc0029b9550) (0xc00230f4a0) Create stream I0406 21:32:24.902607 6 log.go:172] (0xc0029b9550) (0xc00230f4a0) Stream added, broadcasting: 5 I0406 21:32:24.903677 6 log.go:172] (0xc0029b9550) Reply frame received for 5 I0406 21:32:24.975650 6 log.go:172] (0xc0029b9550) Data frame received for 3 I0406 21:32:24.975687 6 log.go:172] (0xc001e90000) (3) Data frame handling I0406 21:32:24.975702 6 log.go:172] (0xc001e90000) (3) Data frame sent I0406 21:32:24.975708 6 log.go:172] (0xc0029b9550) Data frame received for 3 I0406 21:32:24.975713 6 log.go:172] (0xc001e90000) (3) Data frame handling I0406 21:32:24.975822 6 log.go:172] (0xc0029b9550) Data frame received for 5 I0406 21:32:24.975833 6 log.go:172] (0xc00230f4a0) (5) Data frame handling I0406 21:32:24.978233 6 log.go:172] (0xc0029b9550) Data frame received for 1 I0406 21:32:24.978252 6 log.go:172] (0xc002531f40) (1) Data frame handling I0406 21:32:24.978265 6 log.go:172] (0xc002531f40) (1) Data frame sent I0406 21:32:24.978300 6 log.go:172] (0xc0029b9550) (0xc002531f40) Stream removed, broadcasting: 1 I0406 21:32:24.978322 6 log.go:172] (0xc0029b9550) Go away received I0406 21:32:24.978436 6 log.go:172] (0xc0029b9550) (0xc002531f40) Stream removed, broadcasting: 1 I0406 
21:32:24.978468 6 log.go:172] (0xc0029b9550) (0xc001e90000) Stream removed, broadcasting: 3 I0406 21:32:24.978480 6 log.go:172] (0xc0029b9550) (0xc00230f4a0) Stream removed, broadcasting: 5 Apr 6 21:32:24.978: INFO: Found all expected endpoints: [netserver-0] Apr 6 21:32:24.982: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.2.246:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-4966 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 6 21:32:24.982: INFO: >>> kubeConfig: /root/.kube/config I0406 21:32:25.014871 6 log.go:172] (0xc001d462c0) (0xc00288e280) Create stream I0406 21:32:25.014895 6 log.go:172] (0xc001d462c0) (0xc00288e280) Stream added, broadcasting: 1 I0406 21:32:25.017944 6 log.go:172] (0xc001d462c0) Reply frame received for 1 I0406 21:32:25.017990 6 log.go:172] (0xc001d462c0) (0xc00230f540) Create stream I0406 21:32:25.018009 6 log.go:172] (0xc001d462c0) (0xc00230f540) Stream added, broadcasting: 3 I0406 21:32:25.019085 6 log.go:172] (0xc001d462c0) Reply frame received for 3 I0406 21:32:25.019135 6 log.go:172] (0xc001d462c0) (0xc00230f5e0) Create stream I0406 21:32:25.019153 6 log.go:172] (0xc001d462c0) (0xc00230f5e0) Stream added, broadcasting: 5 I0406 21:32:25.020207 6 log.go:172] (0xc001d462c0) Reply frame received for 5 I0406 21:32:25.087353 6 log.go:172] (0xc001d462c0) Data frame received for 3 I0406 21:32:25.087389 6 log.go:172] (0xc00230f540) (3) Data frame handling I0406 21:32:25.087409 6 log.go:172] (0xc001d462c0) Data frame received for 5 I0406 21:32:25.087426 6 log.go:172] (0xc00230f5e0) (5) Data frame handling I0406 21:32:25.087443 6 log.go:172] (0xc00230f540) (3) Data frame sent I0406 21:32:25.087452 6 log.go:172] (0xc001d462c0) Data frame received for 3 I0406 21:32:25.087461 6 log.go:172] (0xc00230f540) (3) Data frame handling I0406 21:32:25.088849 6 log.go:172] (0xc001d462c0) Data frame received for 1 I0406 21:32:25.088860 6 log.go:172] (0xc00288e280) (1) Data frame handling I0406 21:32:25.088867 6 log.go:172] (0xc00288e280) (1) Data frame sent I0406 21:32:25.088880 6 log.go:172] (0xc001d462c0) (0xc00288e280) Stream removed, broadcasting: 1 I0406 21:32:25.088943 6 log.go:172] (0xc001d462c0) (0xc00288e280) Stream removed, broadcasting: 1 I0406 21:32:25.088956 6 log.go:172] (0xc001d462c0) (0xc00230f540) Stream removed, broadcasting: 3 I0406 21:32:25.089036 6 log.go:172] (0xc001d462c0) (0xc00230f5e0) Stream removed, broadcasting: 5 Apr 6 21:32:25.089: INFO: Found all expected endpoints: [netserver-1] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 6 21:32:25.089: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-4966" for this suite. 
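The two ExecWithOptions calls above are the actual connectivity assertion: from a host-network test pod, curl each netserver pod's /hostName endpoint and confirm every expected backend answers. The CLI equivalent, using the pod name, container, and pod IPs from this run:

    kubectl --kubeconfig=/root/.kube/config exec --namespace=pod-network-test-4966 host-test-container-pod -c agnhost -- /bin/sh -c "curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.1.182:8080/hostName"
    kubectl --kubeconfig=/root/.kube/config exec --namespace=pod-network-test-4966 host-test-container-pod -c agnhost -- /bin/sh -c "curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.2.246:8080/hostName"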
• [SLOW TEST:22.419 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29 should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":85,"skipped":1425,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 6 21:32:25.098: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name projected-configmap-test-volume-4b936067-4599-4104-b978-63534bad23ef STEP: Creating a pod to test consume configMaps Apr 6 21:32:25.197: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-f191d9c6-248d-4f8b-820b-eee60db4a129" in namespace "projected-1" to be "success or failure" Apr 6 21:32:25.200: INFO: Pod "pod-projected-configmaps-f191d9c6-248d-4f8b-820b-eee60db4a129": Phase="Pending", Reason="", readiness=false. Elapsed: 3.39886ms Apr 6 21:32:27.204: INFO: Pod "pod-projected-configmaps-f191d9c6-248d-4f8b-820b-eee60db4a129": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007220845s Apr 6 21:32:29.209: INFO: Pod "pod-projected-configmaps-f191d9c6-248d-4f8b-820b-eee60db4a129": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.01190555s STEP: Saw pod success Apr 6 21:32:29.209: INFO: Pod "pod-projected-configmaps-f191d9c6-248d-4f8b-820b-eee60db4a129" satisfied condition "success or failure" Apr 6 21:32:29.211: INFO: Trying to get logs from node jerma-worker pod pod-projected-configmaps-f191d9c6-248d-4f8b-820b-eee60db4a129 container projected-configmap-volume-test: STEP: delete the pod Apr 6 21:32:29.243: INFO: Waiting for pod pod-projected-configmaps-f191d9c6-248d-4f8b-820b-eee60db4a129 to disappear Apr 6 21:32:29.248: INFO: Pod pod-projected-configmaps-f191d9c6-248d-4f8b-820b-eee60db4a129 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 6 21:32:29.248: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1" for this suite. 
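The projected configMap variant differs from the plain configMap volume above only in where the source sits: the configMap is nested under projected.sources, and defaultMode moves up to the projected level. A compact sketch reusing the demo-config map from the earlier example:

    kubectl create -f - <<EOF
    apiVersion: v1
    kind: Pod
    metadata:
      name: pod-projected-configmap
    spec:
      restartPolicy: Never
      containers:
      - name: projected-configmap-volume-test
        image: busybox
        command: ["sh", "-c", "ls -l /etc/projected && cat /etc/projected/data-1"]
        volumeMounts:
        - name: projected-volume
          mountPath: /etc/projected
      volumes:
      - name: projected-volume
        projected:
          defaultMode: 0400
          sources:
          - configMap:
              name: demo-config   # created in the configMap sketch further up
    EOF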
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":86,"skipped":1453,"failed":0} SSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 6 21:32:29.255: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0666 on tmpfs Apr 6 21:32:29.340: INFO: Waiting up to 5m0s for pod "pod-77373c95-9567-4a25-b8c7-9278c878b092" in namespace "emptydir-1515" to be "success or failure" Apr 6 21:32:29.356: INFO: Pod "pod-77373c95-9567-4a25-b8c7-9278c878b092": Phase="Pending", Reason="", readiness=false. Elapsed: 16.137614ms Apr 6 21:32:31.382: INFO: Pod "pod-77373c95-9567-4a25-b8c7-9278c878b092": Phase="Pending", Reason="", readiness=false. Elapsed: 2.042157417s Apr 6 21:32:33.386: INFO: Pod "pod-77373c95-9567-4a25-b8c7-9278c878b092": Phase="Running", Reason="", readiness=true. Elapsed: 4.046241127s Apr 6 21:32:35.390: INFO: Pod "pod-77373c95-9567-4a25-b8c7-9278c878b092": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.050292073s STEP: Saw pod success Apr 6 21:32:35.390: INFO: Pod "pod-77373c95-9567-4a25-b8c7-9278c878b092" satisfied condition "success or failure" Apr 6 21:32:35.394: INFO: Trying to get logs from node jerma-worker pod pod-77373c95-9567-4a25-b8c7-9278c878b092 container test-container: STEP: delete the pod Apr 6 21:32:35.456: INFO: Waiting for pod pod-77373c95-9567-4a25-b8c7-9278c878b092 to disappear Apr 6 21:32:35.463: INFO: Pod pod-77373c95-9567-4a25-b8c7-9278c878b092 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 6 21:32:35.463: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-1515" for this suite. 
• [SLOW TEST:6.215 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":87,"skipped":1459,"failed":0} SSSSSSSSSSS ------------------------------ [sig-auth] ServiceAccounts should allow opting out of API token automount [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 6 21:32:35.470: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should allow opting out of API token automount [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: getting the auto-created API token Apr 6 21:32:36.103: INFO: created pod pod-service-account-defaultsa Apr 6 21:32:36.103: INFO: pod pod-service-account-defaultsa service account token volume mount: true Apr 6 21:32:36.111: INFO: created pod pod-service-account-mountsa Apr 6 21:32:36.111: INFO: pod pod-service-account-mountsa service account token volume mount: true Apr 6 21:32:36.137: INFO: created pod pod-service-account-nomountsa Apr 6 21:32:36.137: INFO: pod pod-service-account-nomountsa service account token volume mount: false Apr 6 21:32:36.160: INFO: created pod pod-service-account-defaultsa-mountspec Apr 6 21:32:36.160: INFO: pod pod-service-account-defaultsa-mountspec service account token volume mount: true Apr 6 21:32:36.194: INFO: created pod pod-service-account-mountsa-mountspec Apr 6 21:32:36.194: INFO: pod pod-service-account-mountsa-mountspec service account token volume mount: true Apr 6 21:32:36.203: INFO: created pod pod-service-account-nomountsa-mountspec Apr 6 21:32:36.203: INFO: pod pod-service-account-nomountsa-mountspec service account token volume mount: true Apr 6 21:32:36.221: INFO: created pod pod-service-account-defaultsa-nomountspec Apr 6 21:32:36.221: INFO: pod pod-service-account-defaultsa-nomountspec service account token volume mount: false Apr 6 21:32:36.236: INFO: created pod pod-service-account-mountsa-nomountspec Apr 6 21:32:36.236: INFO: pod pod-service-account-mountsa-nomountspec service account token volume mount: false Apr 6 21:32:36.287: INFO: created pod pod-service-account-nomountsa-nomountspec Apr 6 21:32:36.287: INFO: pod pod-service-account-nomountsa-nomountspec service account token volume mount: false [AfterEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 6 21:32:36.287: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svcaccounts-5595" for this suite. 
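The ServiceAccounts spec above walks the matrix of account-level versus pod-level automountServiceAccountToken, and the log confirms the pod-level field wins (for example, pod-service-account-nomountsa-mountspec still gets the token mounted). A minimal opt-out sketch, with the ServiceAccount name as an assumption:

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nomount-sa
automountServiceAccountToken: false     # account-level default: do not mount
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-service-account-nomountsa-example
spec:
  serviceAccountName: nomount-sa
  automountServiceAccountToken: false   # pod-level setting overrides the SA default
  containers:
  - name: main
    image: busybox
    command: ["sleep", "3600"]
```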
•{"msg":"PASSED [sig-auth] ServiceAccounts should allow opting out of API token automount [Conformance]","total":278,"completed":88,"skipped":1470,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a secret. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 6 21:32:36.347: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a secret. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Discovering how many secrets are in namespace by default STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a Secret STEP: Ensuring resource quota status captures secret creation STEP: Deleting a secret STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 6 21:32:53.610: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-9852" for this suite. • [SLOW TEST:17.272 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a secret. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a secret. [Conformance]","total":278,"completed":89,"skipped":1501,"failed":0} SSS ------------------------------ [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 6 21:32:53.620: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward api env vars Apr 6 21:32:53.702: INFO: Waiting up to 5m0s for pod "downward-api-7e63646e-7a80-4805-acd6-c2fc395cf8af" in namespace "downward-api-7968" to be "success or failure" Apr 6 21:32:53.710: INFO: Pod "downward-api-7e63646e-7a80-4805-acd6-c2fc395cf8af": Phase="Pending", Reason="", readiness=false. Elapsed: 7.636156ms Apr 6 21:32:55.719: INFO: Pod "downward-api-7e63646e-7a80-4805-acd6-c2fc395cf8af": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.016833267s Apr 6 21:32:57.723: INFO: Pod "downward-api-7e63646e-7a80-4805-acd6-c2fc395cf8af": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.020839096s STEP: Saw pod success Apr 6 21:32:57.723: INFO: Pod "downward-api-7e63646e-7a80-4805-acd6-c2fc395cf8af" satisfied condition "success or failure" Apr 6 21:32:57.726: INFO: Trying to get logs from node jerma-worker pod downward-api-7e63646e-7a80-4805-acd6-c2fc395cf8af container dapi-container: STEP: delete the pod Apr 6 21:32:57.751: INFO: Waiting for pod downward-api-7e63646e-7a80-4805-acd6-c2fc395cf8af to disappear Apr 6 21:32:57.764: INFO: Pod downward-api-7e63646e-7a80-4805-acd6-c2fc395cf8af no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 6 21:32:57.764: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-7968" for this suite. •{"msg":"PASSED [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]","total":278,"completed":90,"skipped":1504,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Secrets should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 6 21:32:57.771: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating secret secrets-4959/secret-test-34f41818-2b9c-438f-b918-d6606d955dbc STEP: Creating a pod to test consume secrets Apr 6 21:32:57.880: INFO: Waiting up to 5m0s for pod "pod-configmaps-5cfe5ae8-3514-4ebe-912b-9b4aa7c862fd" in namespace "secrets-4959" to be "success or failure" Apr 6 21:32:57.915: INFO: Pod "pod-configmaps-5cfe5ae8-3514-4ebe-912b-9b4aa7c862fd": Phase="Pending", Reason="", readiness=false. Elapsed: 35.27322ms Apr 6 21:32:59.919: INFO: Pod "pod-configmaps-5cfe5ae8-3514-4ebe-912b-9b4aa7c862fd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.038892839s Apr 6 21:33:01.922: INFO: Pod "pod-configmaps-5cfe5ae8-3514-4ebe-912b-9b4aa7c862fd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.042223069s STEP: Saw pod success Apr 6 21:33:01.922: INFO: Pod "pod-configmaps-5cfe5ae8-3514-4ebe-912b-9b4aa7c862fd" satisfied condition "success or failure" Apr 6 21:33:01.926: INFO: Trying to get logs from node jerma-worker2 pod pod-configmaps-5cfe5ae8-3514-4ebe-912b-9b4aa7c862fd container env-test: STEP: delete the pod Apr 6 21:33:01.963: INFO: Waiting for pod pod-configmaps-5cfe5ae8-3514-4ebe-912b-9b4aa7c862fd to disappear Apr 6 21:33:01.966: INFO: Pod pod-configmaps-5cfe5ae8-3514-4ebe-912b-9b4aa7c862fd no longer exists [AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 6 21:33:01.966: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-4959" for this suite. 
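The Secrets spec that just finished consumes a secret through environment variables rather than a volume. A minimal sketch, with the secret name, key, and variable name as assumptions:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: secret-test-example
stringData:
  data-1: value-1
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-secret-env-example
spec:
  restartPolicy: Never
  containers:
  - name: env-test
    image: busybox
    command: ["sh", "-c", "env | grep SECRET_DATA"]
    env:
    - name: SECRET_DATA
      valueFrom:
        secretKeyRef:        # injects the secret value at container start
          name: secret-test-example
          key: data-1
```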
•{"msg":"PASSED [sig-api-machinery] Secrets should be consumable via the environment [NodeConformance] [Conformance]","total":278,"completed":91,"skipped":1526,"failed":0} SSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should get a host IP [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 6 21:33:01.973: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177 [It] should get a host IP [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating pod Apr 6 21:33:06.110: INFO: Pod pod-hostip-8bed8c31-83d5-4580-b461-e967ae023efc has hostIP: 172.17.0.10 [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 6 21:33:06.110: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-6092" for this suite. •{"msg":"PASSED [k8s.io] Pods should get a host IP [NodeConformance] [Conformance]","total":278,"completed":92,"skipped":1539,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 6 21:33:06.119: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating projection with secret that has name projected-secret-test-map-70d5df24-50ac-4a70-92d5-0c72f4cae2f9 STEP: Creating a pod to test consume secrets Apr 6 21:33:06.173: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-00d69e54-f49d-4376-ac5b-6daca4feab41" in namespace "projected-5555" to be "success or failure" Apr 6 21:33:06.177: INFO: Pod "pod-projected-secrets-00d69e54-f49d-4376-ac5b-6daca4feab41": Phase="Pending", Reason="", readiness=false. Elapsed: 4.371453ms Apr 6 21:33:08.181: INFO: Pod "pod-projected-secrets-00d69e54-f49d-4376-ac5b-6daca4feab41": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00834594s Apr 6 21:33:10.185: INFO: Pod "pod-projected-secrets-00d69e54-f49d-4376-ac5b-6daca4feab41": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.012798739s STEP: Saw pod success Apr 6 21:33:10.185: INFO: Pod "pod-projected-secrets-00d69e54-f49d-4376-ac5b-6daca4feab41" satisfied condition "success or failure" Apr 6 21:33:10.188: INFO: Trying to get logs from node jerma-worker pod pod-projected-secrets-00d69e54-f49d-4376-ac5b-6daca4feab41 container projected-secret-volume-test: STEP: delete the pod Apr 6 21:33:10.209: INFO: Waiting for pod pod-projected-secrets-00d69e54-f49d-4376-ac5b-6daca4feab41 to disappear Apr 6 21:33:10.213: INFO: Pod pod-projected-secrets-00d69e54-f49d-4376-ac5b-6daca4feab41 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 6 21:33:10.213: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5555" for this suite. •{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":93,"skipped":1554,"failed":0} S ------------------------------ [sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 6 21:33:10.220: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-3732.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-1.dns-test-service.dns-3732.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/wheezy_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-3732.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-3732.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-1.dns-test-service.dns-3732.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/jessie_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-3732.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe /etc/hosts STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Apr 6 21:33:16.333: INFO: DNS probes using dns-3732/dns-test-97eff66e-118b-4b9d-a5a3-e6f36d15cce2 succeeded STEP: deleting the pod [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 6 21:33:16.360: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-3732" for this suite. • [SLOW TEST:6.260 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]","total":278,"completed":94,"skipped":1555,"failed":0} SSSSS ------------------------------ [sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 6 21:33:16.479: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name cm-test-opt-del-52bacb98-f917-4ce9-a0b4-6cd1ec637071 STEP: Creating configMap with name cm-test-opt-upd-839f6535-6fb9-47e7-985c-28eed1046498 STEP: Creating the pod STEP: Deleting configmap cm-test-opt-del-52bacb98-f917-4ce9-a0b4-6cd1ec637071 STEP: Updating configmap cm-test-opt-upd-839f6535-6fb9-47e7-985c-28eed1046498 STEP: Creating configMap with name cm-test-opt-create-26b422e7-6422-4117-8597-3b7be7b45a50 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 6 21:33:24.977: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2106" for this suite. 
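The "optional updates" projected-configMap spec above relies on the projection's optional flag: a referenced ConfigMap may be deleted or not yet exist, and a later create or update must eventually appear in the mounted files, which is what the waiting-to-observe step checks. A sketch of that shape (names assumed):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-optional-example
spec:
  containers:
  - name: watcher
    image: busybox
    # re-read the mounted files so updates become visible in the output
    command: ["sh", "-c", "while true; do cat /etc/cm/* 2>/dev/null; sleep 5; done"]
    volumeMounts:
    - name: cm
      mountPath: /etc/cm
  volumes:
  - name: cm
    projected:
      sources:
      - configMap:
          name: cm-test-opt-create   # may not exist yet; optional tolerates that
          optional: true
```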
• [SLOW TEST:8.506 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":95,"skipped":1560,"failed":0} SSSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 6 21:33:24.986: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating a watch on configmaps with a certain label STEP: creating a new configmap STEP: modifying the configmap once STEP: changing the label value of the configmap STEP: Expecting to observe a delete notification for the watched object Apr 6 21:33:25.090: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-5019 /api/v1/namespaces/watch-5019/configmaps/e2e-watch-test-label-changed f2aaa9db-214e-4ad6-8edf-a299446a3d53 5977469 0 2020-04-06 21:33:25 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},} Apr 6 21:33:25.091: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-5019 /api/v1/namespaces/watch-5019/configmaps/e2e-watch-test-label-changed f2aaa9db-214e-4ad6-8edf-a299446a3d53 5977470 0 2020-04-06 21:33:25 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} Apr 6 21:33:25.091: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-5019 /api/v1/namespaces/watch-5019/configmaps/e2e-watch-test-label-changed f2aaa9db-214e-4ad6-8edf-a299446a3d53 5977471 0 2020-04-06 21:33:25 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} STEP: modifying the configmap a second time STEP: Expecting not to observe a notification because the object no longer meets the selector's requirements STEP: changing the label value of the configmap back STEP: modifying the configmap a third time STEP: deleting the configmap STEP: Expecting to observe an add notification for the watched object when the label value was restored Apr 6 21:33:35.114: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-5019 /api/v1/namespaces/watch-5019/configmaps/e2e-watch-test-label-changed f2aaa9db-214e-4ad6-8edf-a299446a3d53 5977521 0 2020-04-06 21:33:25 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{mutation: 
2,},BinaryData:map[string][]byte{},} Apr 6 21:33:35.114: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-5019 /api/v1/namespaces/watch-5019/configmaps/e2e-watch-test-label-changed f2aaa9db-214e-4ad6-8edf-a299446a3d53 5977522 0 2020-04-06 21:33:25 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},} Apr 6 21:33:35.114: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-5019 /api/v1/namespaces/watch-5019/configmaps/e2e-watch-test-label-changed f2aaa9db-214e-4ad6-8edf-a299446a3d53 5977523 0 2020-04-06 21:33:25 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 6 21:33:35.114: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-5019" for this suite. • [SLOW TEST:10.136 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance]","total":278,"completed":96,"skipped":1571,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 6 21:33:35.123: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Apr 6 21:33:35.199: INFO: Waiting up to 5m0s for pod "downwardapi-volume-966e02ea-6715-439c-b939-52b945512fe5" in namespace "projected-1454" to be "success or failure" Apr 6 21:33:35.218: INFO: Pod "downwardapi-volume-966e02ea-6715-439c-b939-52b945512fe5": Phase="Pending", Reason="", readiness=false. Elapsed: 19.165961ms Apr 6 21:33:37.222: INFO: Pod "downwardapi-volume-966e02ea-6715-439c-b939-52b945512fe5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022508768s Apr 6 21:33:39.227: INFO: Pod "downwardapi-volume-966e02ea-6715-439c-b939-52b945512fe5": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.027960983s STEP: Saw pod success Apr 6 21:33:39.227: INFO: Pod "downwardapi-volume-966e02ea-6715-439c-b939-52b945512fe5" satisfied condition "success or failure" Apr 6 21:33:39.254: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-966e02ea-6715-439c-b939-52b945512fe5 container client-container: STEP: delete the pod Apr 6 21:33:39.295: INFO: Waiting for pod downwardapi-volume-966e02ea-6715-439c-b939-52b945512fe5 to disappear Apr 6 21:33:39.304: INFO: Pod downwardapi-volume-966e02ea-6715-439c-b939-52b945512fe5 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 6 21:33:39.304: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1454" for this suite. •{"msg":"PASSED [sig-storage] Projected downwardAPI should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":97,"skipped":1587,"failed":0} SSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 6 21:33:39.312: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should orphan pods created by rc if delete options say so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the rc STEP: delete the rc STEP: wait for the rc to be deleted STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the pods STEP: Gathering metrics W0406 21:34:19.648269 6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Apr 6 21:34:19.648: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 6 21:34:19.648: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-6702" for this suite. 
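The garbage-collector spec above issues an orphaning delete on a replication controller and then waits 30 seconds to confirm the pods survive. Delete options have no manifest form, so the equivalent kubectl invocations appear as comments below; the RC itself is a minimal stand-in with assumed name and image.

```yaml
# kubectl delete rc orphan-test-rc --cascade=orphan   # kubectl >= 1.20
# kubectl delete rc orphan-test-rc --cascade=false    # kubectl of this vintage
apiVersion: v1
kind: ReplicationController
metadata:
  name: orphan-test-rc          # assumed name
spec:
  replicas: 2
  selector:
    app: orphan-test
  template:
    metadata:
      labels:
        app: orphan-test
    spec:
      containers:
      - name: nginx
        image: nginx            # assumed image
```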
• [SLOW TEST:40.343 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should orphan pods created by rc if delete options say so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance]","total":278,"completed":98,"skipped":1596,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 6 21:34:19.655: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:125 STEP: Setting up server cert STEP: Create role binding to let cr conversion webhook read extension-apiserver-authentication STEP: Deploying the custom resource conversion webhook pod STEP: Wait for the deployment to be ready Apr 6 21:34:20.560: INFO: deployment "sample-crd-conversion-webhook-deployment" doesn't have the required revision set Apr 6 21:34:22.584: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721805660, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721805660, loc:(*time.Location)(0x78ee080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721805660, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721805660, loc:(*time.Location)(0x78ee080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-78dcf5dd84\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Apr 6 21:34:25.610: INFO: Waiting for amount of service:e2e-test-crd-conversion-webhook endpoints to be 1 [It] should be able to convert a non homogeneous list of CRs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Apr 6 21:34:25.623: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating a v1 custom resource STEP: Create a v2 custom resource STEP: List CRs in v1 STEP: List CRs in v2 [AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 6 21:34:27.331: 
INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-webhook-7076" for this suite. [AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:136 • [SLOW TEST:8.081 seconds] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to convert a non homogeneous list of CRs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","total":278,"completed":99,"skipped":1610,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 6 21:34:27.738: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name secret-test-map-615f7dc6-58bf-4541-a22f-ec12e7074357 STEP: Creating a pod to test consume secrets Apr 6 21:34:27.967: INFO: Waiting up to 5m0s for pod "pod-secrets-ce2826d2-2fc9-4aac-a04d-d040b9031a36" in namespace "secrets-6991" to be "success or failure" Apr 6 21:34:28.099: INFO: Pod "pod-secrets-ce2826d2-2fc9-4aac-a04d-d040b9031a36": Phase="Pending", Reason="", readiness=false. Elapsed: 132.301685ms Apr 6 21:34:30.103: INFO: Pod "pod-secrets-ce2826d2-2fc9-4aac-a04d-d040b9031a36": Phase="Pending", Reason="", readiness=false. Elapsed: 2.136200295s Apr 6 21:34:32.107: INFO: Pod "pod-secrets-ce2826d2-2fc9-4aac-a04d-d040b9031a36": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.140297213s STEP: Saw pod success Apr 6 21:34:32.107: INFO: Pod "pod-secrets-ce2826d2-2fc9-4aac-a04d-d040b9031a36" satisfied condition "success or failure" Apr 6 21:34:32.110: INFO: Trying to get logs from node jerma-worker pod pod-secrets-ce2826d2-2fc9-4aac-a04d-d040b9031a36 container secret-volume-test: STEP: delete the pod Apr 6 21:34:32.127: INFO: Waiting for pod pod-secrets-ce2826d2-2fc9-4aac-a04d-d040b9031a36 to disappear Apr 6 21:34:32.131: INFO: Pod pod-secrets-ce2826d2-2fc9-4aac-a04d-d040b9031a36 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 6 21:34:32.131: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-6991" for this suite. 
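"Mappings and Item Mode set" in the secret-volume spec above means the key is remapped to a different path and a per-item mode overrides the volume default. A sketch with assumed names:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets-mapped-example
spec:
  restartPolicy: Never
  containers:
  - name: secret-volume-test
    image: busybox
    command: ["sh", "-c", "stat -c '%a' /etc/secret-volume/new-path-data-1"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
  volumes:
  - name: secret-volume
    secret:
      secretName: secret-test-map-example
      items:
      - key: data-1              # original key in the Secret
        path: new-path-data-1    # the mapping under test
        mode: 0400               # the per-item mode under test
```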
•{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":100,"skipped":1653,"failed":0} SSS ------------------------------ [sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 6 21:34:32.138: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward api env vars Apr 6 21:34:32.231: INFO: Waiting up to 5m0s for pod "downward-api-2ad8d998-66d9-4d02-8712-3373a7417efb" in namespace "downward-api-5348" to be "success or failure" Apr 6 21:34:32.239: INFO: Pod "downward-api-2ad8d998-66d9-4d02-8712-3373a7417efb": Phase="Pending", Reason="", readiness=false. Elapsed: 8.124708ms Apr 6 21:34:34.242: INFO: Pod "downward-api-2ad8d998-66d9-4d02-8712-3373a7417efb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.011771232s Apr 6 21:34:36.247: INFO: Pod "downward-api-2ad8d998-66d9-4d02-8712-3373a7417efb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.015921847s STEP: Saw pod success Apr 6 21:34:36.247: INFO: Pod "downward-api-2ad8d998-66d9-4d02-8712-3373a7417efb" satisfied condition "success or failure" Apr 6 21:34:36.249: INFO: Trying to get logs from node jerma-worker2 pod downward-api-2ad8d998-66d9-4d02-8712-3373a7417efb container dapi-container: STEP: delete the pod Apr 6 21:34:36.271: INFO: Waiting for pod downward-api-2ad8d998-66d9-4d02-8712-3373a7417efb to disappear Apr 6 21:34:36.275: INFO: Pod downward-api-2ad8d998-66d9-4d02-8712-3373a7417efb no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 6 21:34:36.275: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-5348" for this suite. 
•{"msg":"PASSED [sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]","total":278,"completed":101,"skipped":1656,"failed":0} S ------------------------------ [k8s.io] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 6 21:34:36.282: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [It] should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 6 21:34:40.386: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-3814" for this suite. •{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":102,"skipped":1657,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 6 21:34:40.395: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177 [It] should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the pod STEP: setting up watch STEP: submitting the pod to kubernetes Apr 6 21:34:40.475: INFO: observed the pod list STEP: verifying the pod is in kubernetes STEP: verifying pod creation was observed STEP: deleting the pod gracefully STEP: verifying the kubelet observed the termination notice STEP: verifying pod deletion was observed [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 6 21:34:59.479: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-9257" for this suite. 
• [SLOW TEST:19.091 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Pods should be submitted and removed [NodeConformance] [Conformance]","total":278,"completed":103,"skipped":1689,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 6 21:34:59.486: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64 STEP: create the container to handle the HTTPGet hook request. [It] should execute poststart exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the pod with lifecycle hook STEP: check poststart hook STEP: delete the pod with lifecycle hook Apr 6 21:35:07.624: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Apr 6 21:35:07.629: INFO: Pod pod-with-poststart-exec-hook still exists Apr 6 21:35:09.629: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Apr 6 21:35:09.633: INFO: Pod pod-with-poststart-exec-hook still exists Apr 6 21:35:11.629: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Apr 6 21:35:11.633: INFO: Pod pod-with-poststart-exec-hook still exists Apr 6 21:35:13.629: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Apr 6 21:35:13.633: INFO: Pod pod-with-poststart-exec-hook still exists Apr 6 21:35:15.629: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Apr 6 21:35:15.634: INFO: Pod pod-with-poststart-exec-hook still exists Apr 6 21:35:17.629: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Apr 6 21:35:17.634: INFO: Pod pod-with-poststart-exec-hook still exists Apr 6 21:35:19.629: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Apr 6 21:35:19.633: INFO: Pod pod-with-poststart-exec-hook no longer exists [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 6 21:35:19.633: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-4274" for this suite. 
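The lifecycle-hook spec above registers a postStart exec hook, verifies its side effect, and then polls for pod deletion, as the repeated "still exists" lines show. A sketch of the hook shape (image and command assumed):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-with-poststart-exec-hook-example
spec:
  containers:
  - name: main
    image: busybox
    command: ["sh", "-c", "sleep 3600"]
    lifecycle:
      postStart:
        exec:
          # runs inside the container immediately after it starts
          command: ["sh", "-c", "echo started > /tmp/poststart-marker"]
```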
• [SLOW TEST:20.156 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute poststart exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance]","total":278,"completed":104,"skipped":1704,"failed":0} SSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch watch on custom resource definition objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 6 21:35:19.643: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-watch STEP: Waiting for a default service account to be provisioned in namespace [It] watch on custom resource definition objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Apr 6 21:35:19.710: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating first CR Apr 6 21:35:20.299: INFO: Got : ADDED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-04-06T21:35:20Z generation:1 name:name1 resourceVersion:5978250 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:80dc4422-300d-48e9-94d2-65d3336a7c3c] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Creating second CR Apr 6 21:35:30.303: INFO: Got : ADDED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-04-06T21:35:30Z generation:1 name:name2 resourceVersion:5978304 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:47ae14b9-1648-4cf1-8732-329881e7d98e] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Modifying first CR Apr 6 21:35:40.309: INFO: Got : MODIFIED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-04-06T21:35:20Z generation:2 name:name1 resourceVersion:5978333 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:80dc4422-300d-48e9-94d2-65d3336a7c3c] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Modifying second CR Apr 6 21:35:50.319: INFO: Got : MODIFIED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-04-06T21:35:30Z generation:2 name:name2 resourceVersion:5978364 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:47ae14b9-1648-4cf1-8732-329881e7d98e] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Deleting first CR Apr 6 21:36:00.327: INFO: Got : DELETED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-04-06T21:35:20Z generation:2 name:name1 
resourceVersion:5978394 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:80dc4422-300d-48e9-94d2-65d3336a7c3c] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Deleting second CR Apr 6 21:36:10.335: INFO: Got : DELETED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-04-06T21:35:30Z generation:2 name:name2 resourceVersion:5978424 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:47ae14b9-1648-4cf1-8732-329881e7d98e] num:map[num1:9223372036854775807 num2:1000000]]} [AfterEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 6 21:36:20.845: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-watch-9326" for this suite. • [SLOW TEST:61.212 seconds] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 CustomResourceDefinition Watch /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_watch.go:41 watch on custom resource definition objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch watch on custom resource definition objects [Conformance]","total":278,"completed":105,"skipped":1710,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 6 21:36:20.855: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name configmap-test-volume-map-1558318c-c172-4074-a880-ff74187ceecf STEP: Creating a pod to test consume configMaps Apr 6 21:36:20.969: INFO: Waiting up to 5m0s for pod "pod-configmaps-3cf83b95-897a-42ca-b513-c7260485d135" in namespace "configmap-9941" to be "success or failure" Apr 6 21:36:20.979: INFO: Pod "pod-configmaps-3cf83b95-897a-42ca-b513-c7260485d135": Phase="Pending", Reason="", readiness=false. Elapsed: 9.779723ms Apr 6 21:36:23.015: INFO: Pod "pod-configmaps-3cf83b95-897a-42ca-b513-c7260485d135": Phase="Pending", Reason="", readiness=false. Elapsed: 2.045988444s Apr 6 21:36:25.019: INFO: Pod "pod-configmaps-3cf83b95-897a-42ca-b513-c7260485d135": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.04915547s STEP: Saw pod success Apr 6 21:36:25.019: INFO: Pod "pod-configmaps-3cf83b95-897a-42ca-b513-c7260485d135" satisfied condition "success or failure" Apr 6 21:36:25.021: INFO: Trying to get logs from node jerma-worker pod pod-configmaps-3cf83b95-897a-42ca-b513-c7260485d135 container configmap-volume-test: STEP: delete the pod Apr 6 21:36:25.076: INFO: Waiting for pod pod-configmaps-3cf83b95-897a-42ca-b513-c7260485d135 to disappear Apr 6 21:36:25.081: INFO: Pod pod-configmaps-3cf83b95-897a-42ca-b513-c7260485d135 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 6 21:36:25.082: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-9941" for this suite. •{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":278,"completed":106,"skipped":1735,"failed":0} SSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 6 21:36:25.088: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177 [It] should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: updating the pod Apr 6 21:36:29.691: INFO: Successfully updated pod "pod-update-activedeadlineseconds-853866b5-d425-42ab-acb8-59c2db9d9481" Apr 6 21:36:29.691: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-853866b5-d425-42ab-acb8-59c2db9d9481" in namespace "pods-3740" to be "terminated due to deadline exceeded" Apr 6 21:36:29.720: INFO: Pod "pod-update-activedeadlineseconds-853866b5-d425-42ab-acb8-59c2db9d9481": Phase="Running", Reason="", readiness=true. Elapsed: 29.636919ms Apr 6 21:36:31.724: INFO: Pod "pod-update-activedeadlineseconds-853866b5-d425-42ab-acb8-59c2db9d9481": Phase="Failed", Reason="DeadlineExceeded", readiness=false. Elapsed: 2.033318996s Apr 6 21:36:31.724: INFO: Pod "pod-update-activedeadlineseconds-853866b5-d425-42ab-acb8-59c2db9d9481" satisfied condition "terminated due to deadline exceeded" [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 6 21:36:31.724: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-3740" for this suite. 
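The activeDeadlineSeconds spec above starts a long-running pod and then updates the deadline downward, after which the kubelet fails the pod with Reason=DeadlineExceeded, matching the phase transitions in the log. A sketch with assumed values (note that on a running pod the field can be added or decreased, not increased):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-update-activedeadlineseconds-example
spec:
  activeDeadlineSeconds: 30   # later patched smaller, e.g.:
  # kubectl patch pod pod-update-activedeadlineseconds-example \
  #   -p '{"spec":{"activeDeadlineSeconds":5}}'
  restartPolicy: Never
  containers:
  - name: main
    image: busybox
    command: ["sh", "-c", "sleep 600"]
```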
• [SLOW TEST:6.642 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]","total":278,"completed":107,"skipped":1748,"failed":0} SSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 6 21:36:31.730: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Apr 6 21:36:32.146: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Apr 6 21:36:34.158: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721805792, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721805792, loc:(*time.Location)(0x78ee080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721805792, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721805792, loc:(*time.Location)(0x78ee080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Apr 6 21:36:37.186: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate custom resource with pruning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Apr 6 21:36:37.190: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the mutating webhook for custom resource e2e-test-webhook-8459-crds.webhook.example.com via the AdmissionRegistration API STEP: Creating a custom resource that should be mutated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 6 21:36:38.353: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-3969" for this suite. 
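
For context on what "mutate custom resource with pruning" exercises: the registered webhook answers each AdmissionReview with a JSONPatch, and because the CRD has a structural schema with pruning enabled, any patched-in field the schema does not declare is pruned again by the API server. The log does not show the webhook's reply, so the patch below is only an illustrative sketch:

    package main

    import (
        "encoding/json"
        "fmt"

        admissionv1 "k8s.io/api/admission/v1"
    )

    func main() {
        // A mutating webhook's response: allow the object and attach a JSONPatch.
        patch := []byte(`[{"op":"add","path":"/data/mutated","value":"true"}]`) // illustrative patch
        pt := admissionv1.PatchTypeJSONPatch
        resp := admissionv1.AdmissionResponse{
            Allowed:   true,
            Patch:     patch,
            PatchType: &pt,
        }
        out, _ := json.MarshalIndent(&resp, "", "  ")
        // Patch serializes base64-encoded, exactly as it travels on the wire.
        fmt.Println(string(out))
    }
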
STEP: Destroying namespace "webhook-3969-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:6.827 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate custom resource with pruning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]","total":278,"completed":108,"skipped":1756,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 6 21:36:38.557: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153 [It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the pod Apr 6 21:36:38.595: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 6 21:36:43.980: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-7894" for this suite. 
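
The init-container test just logged relies on a simple contract: with restartPolicy: Never, a failed init container is terminal for the whole pod, so the app container never starts and the pod phase goes to Failed. A minimal sketch of such a pod (names and images are illustrative, not the suite's fixtures):

    package main

    import (
        "encoding/json"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    func main() {
        pod := corev1.Pod{
            TypeMeta:   metav1.TypeMeta{APIVersion: "v1", Kind: "Pod"},
            ObjectMeta: metav1.ObjectMeta{Name: "init-fail-demo"}, // hypothetical name
            Spec: corev1.PodSpec{
                // Never means the failed init container is not retried.
                RestartPolicy: corev1.RestartPolicyNever,
                InitContainers: []corev1.Container{{
                    Name:    "init-fails",
                    Image:   "busybox",
                    Command: []string{"sh", "-c", "exit 1"},
                }},
                Containers: []corev1.Container{{
                    Name:    "app",
                    Image:   "busybox",
                    Command: []string{"sh", "-c", "echo never runs"},
                }},
            },
        }
        out, _ := json.MarshalIndent(&pod, "", "  ")
        fmt.Println(string(out))
    }
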
• [SLOW TEST:5.432 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]","total":278,"completed":109,"skipped":1774,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 6 21:36:43.990: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name configmap-test-volume-6bcc04e8-1833-40a9-b834-5b0c644cb23d STEP: Creating a pod to test consume configMaps Apr 6 21:36:44.069: INFO: Waiting up to 5m0s for pod "pod-configmaps-6dc95045-af38-492e-bd25-88afda3189d6" in namespace "configmap-7468" to be "success or failure" Apr 6 21:36:44.090: INFO: Pod "pod-configmaps-6dc95045-af38-492e-bd25-88afda3189d6": Phase="Pending", Reason="", readiness=false. Elapsed: 20.915245ms Apr 6 21:36:46.094: INFO: Pod "pod-configmaps-6dc95045-af38-492e-bd25-88afda3189d6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024775518s Apr 6 21:36:48.098: INFO: Pod "pod-configmaps-6dc95045-af38-492e-bd25-88afda3189d6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.029233359s STEP: Saw pod success Apr 6 21:36:48.098: INFO: Pod "pod-configmaps-6dc95045-af38-492e-bd25-88afda3189d6" satisfied condition "success or failure" Apr 6 21:36:48.101: INFO: Trying to get logs from node jerma-worker2 pod pod-configmaps-6dc95045-af38-492e-bd25-88afda3189d6 container configmap-volume-test: STEP: delete the pod Apr 6 21:36:48.150: INFO: Waiting for pod pod-configmaps-6dc95045-af38-492e-bd25-88afda3189d6 to disappear Apr 6 21:36:48.163: INFO: Pod pod-configmaps-6dc95045-af38-492e-bd25-88afda3189d6 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 6 21:36:48.164: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-7468" for this suite. 
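
Both ConfigMap volume tests in this stretch of the log (the "with mappings" run logged earlier and the plain run just above) boil down to pod specs like the sketch below; the items list is what the "mappings" variant adds, remapping a ConfigMap key onto a chosen relative path. ConfigMap name, key, and paths here are illustrative:

    package main

    import (
        "encoding/json"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    func main() {
        pod := corev1.Pod{
            TypeMeta:   metav1.TypeMeta{APIVersion: "v1", Kind: "Pod"},
            ObjectMeta: metav1.ObjectMeta{Name: "cm-volume-demo"}, // hypothetical name
            Spec: corev1.PodSpec{
                RestartPolicy: corev1.RestartPolicyNever,
                Volumes: []corev1.Volume{{
                    Name: "cm",
                    VolumeSource: corev1.VolumeSource{
                        ConfigMap: &corev1.ConfigMapVolumeSource{
                            LocalObjectReference: corev1.LocalObjectReference{Name: "my-config"}, // hypothetical
                            // Omit Items and every key appears under the mount path;
                            // with Items, only the remapped keys are projected.
                            Items: []corev1.KeyToPath{{Key: "data-1", Path: "path/to/data-1"}},
                        },
                    },
                }},
                Containers: []corev1.Container{{
                    Name:         "reader",
                    Image:        "busybox",
                    Command:      []string{"cat", "/etc/cm/path/to/data-1"},
                    VolumeMounts: []corev1.VolumeMount{{Name: "cm", MountPath: "/etc/cm"}},
                }},
            },
        }
        out, _ := json.MarshalIndent(&pod, "", "  ")
        fmt.Println(string(out))
    }
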
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":278,"completed":110,"skipped":1812,"failed":0} S ------------------------------ [sig-auth] ServiceAccounts should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 6 21:36:48.171: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: getting the auto-created API token STEP: reading a file in the container Apr 6 21:36:52.779: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-1179 pod-service-account-640f22dc-753d-46ab-b3aa-1b9a380002b9 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/token' STEP: reading a file in the container Apr 6 21:36:55.446: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-1179 pod-service-account-640f22dc-753d-46ab-b3aa-1b9a380002b9 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/ca.crt' STEP: reading a file in the container Apr 6 21:36:55.665: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-1179 pod-service-account-640f22dc-753d-46ab-b3aa-1b9a380002b9 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/namespace' [AfterEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 6 21:36:55.878: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svcaccounts-1179" for this suite. 
• [SLOW TEST:7.715 seconds] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23 should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-auth] ServiceAccounts should mount an API token into pods [Conformance]","total":278,"completed":111,"skipped":1813,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 6 21:36:55.886: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name secret-test-3a7f98fc-8b7f-4365-af7e-d620ba1b2130 STEP: Creating a pod to test consume secrets Apr 6 21:36:55.977: INFO: Waiting up to 5m0s for pod "pod-secrets-7fd5a7a0-0b68-4c33-9d36-e3b57ca36e93" in namespace "secrets-4823" to be "success or failure" Apr 6 21:36:56.002: INFO: Pod "pod-secrets-7fd5a7a0-0b68-4c33-9d36-e3b57ca36e93": Phase="Pending", Reason="", readiness=false. Elapsed: 24.941727ms Apr 6 21:36:58.006: INFO: Pod "pod-secrets-7fd5a7a0-0b68-4c33-9d36-e3b57ca36e93": Phase="Pending", Reason="", readiness=false. Elapsed: 2.029117326s Apr 6 21:37:00.010: INFO: Pod "pod-secrets-7fd5a7a0-0b68-4c33-9d36-e3b57ca36e93": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.033364479s STEP: Saw pod success Apr 6 21:37:00.010: INFO: Pod "pod-secrets-7fd5a7a0-0b68-4c33-9d36-e3b57ca36e93" satisfied condition "success or failure" Apr 6 21:37:00.013: INFO: Trying to get logs from node jerma-worker2 pod pod-secrets-7fd5a7a0-0b68-4c33-9d36-e3b57ca36e93 container secret-volume-test: STEP: delete the pod Apr 6 21:37:00.035: INFO: Waiting for pod pod-secrets-7fd5a7a0-0b68-4c33-9d36-e3b57ca36e93 to disappear Apr 6 21:37:00.040: INFO: Pod pod-secrets-7fd5a7a0-0b68-4c33-9d36-e3b57ca36e93 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 6 21:37:00.040: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-4823" for this suite. 
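
The "multiple volumes" secret test mounts one Secret twice under different paths; the point is that a single Secret can back any number of volumes in the same pod. An illustrative sketch (secret name, key, and mount paths are assumptions, not the suite's fixtures):

    package main

    import (
        "encoding/json"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    func main() {
        // Two volumes, same backing Secret.
        secretVol := func(name string) corev1.Volume {
            return corev1.Volume{
                Name: name,
                VolumeSource: corev1.VolumeSource{
                    Secret: &corev1.SecretVolumeSource{SecretName: "shared-secret"}, // hypothetical
                },
            }
        }
        pod := corev1.Pod{
            TypeMeta:   metav1.TypeMeta{APIVersion: "v1", Kind: "Pod"},
            ObjectMeta: metav1.ObjectMeta{Name: "secret-two-mounts"}, // hypothetical name
            Spec: corev1.PodSpec{
                RestartPolicy: corev1.RestartPolicyNever,
                Volumes:       []corev1.Volume{secretVol("secret-vol-1"), secretVol("secret-vol-2")},
                Containers: []corev1.Container{{
                    Name:    "reader",
                    Image:   "busybox",
                    Command: []string{"sh", "-c", "cat /etc/secret-1/data-1 /etc/secret-2/data-1"},
                    VolumeMounts: []corev1.VolumeMount{
                        {Name: "secret-vol-1", MountPath: "/etc/secret-1", ReadOnly: true},
                        {Name: "secret-vol-2", MountPath: "/etc/secret-2", ReadOnly: true},
                    },
                }},
            },
        }
        out, _ := json.MarshalIndent(&pod, "", "  ")
        fmt.Println(string(out))
    }
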
•{"msg":"PASSED [sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":278,"completed":112,"skipped":1840,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 6 21:37:00.047: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting up data [It] should support subpaths with secret pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating pod pod-subpath-test-secret-b65z STEP: Creating a pod to test atomic-volume-subpath Apr 6 21:37:00.130: INFO: Waiting up to 5m0s for pod "pod-subpath-test-secret-b65z" in namespace "subpath-2045" to be "success or failure" Apr 6 21:37:00.147: INFO: Pod "pod-subpath-test-secret-b65z": Phase="Pending", Reason="", readiness=false. Elapsed: 16.791773ms Apr 6 21:37:02.151: INFO: Pod "pod-subpath-test-secret-b65z": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021150542s Apr 6 21:37:04.155: INFO: Pod "pod-subpath-test-secret-b65z": Phase="Running", Reason="", readiness=true. Elapsed: 4.024983614s Apr 6 21:37:06.159: INFO: Pod "pod-subpath-test-secret-b65z": Phase="Running", Reason="", readiness=true. Elapsed: 6.028765752s Apr 6 21:37:08.163: INFO: Pod "pod-subpath-test-secret-b65z": Phase="Running", Reason="", readiness=true. Elapsed: 8.033040512s Apr 6 21:37:10.167: INFO: Pod "pod-subpath-test-secret-b65z": Phase="Running", Reason="", readiness=true. Elapsed: 10.037049262s Apr 6 21:37:12.171: INFO: Pod "pod-subpath-test-secret-b65z": Phase="Running", Reason="", readiness=true. Elapsed: 12.041134259s Apr 6 21:37:14.175: INFO: Pod "pod-subpath-test-secret-b65z": Phase="Running", Reason="", readiness=true. Elapsed: 14.0451131s Apr 6 21:37:16.180: INFO: Pod "pod-subpath-test-secret-b65z": Phase="Running", Reason="", readiness=true. Elapsed: 16.049886924s Apr 6 21:37:18.184: INFO: Pod "pod-subpath-test-secret-b65z": Phase="Running", Reason="", readiness=true. Elapsed: 18.053818312s Apr 6 21:37:20.188: INFO: Pod "pod-subpath-test-secret-b65z": Phase="Running", Reason="", readiness=true. Elapsed: 20.057509499s Apr 6 21:37:22.192: INFO: Pod "pod-subpath-test-secret-b65z": Phase="Running", Reason="", readiness=true. Elapsed: 22.061243528s Apr 6 21:37:24.213: INFO: Pod "pod-subpath-test-secret-b65z": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 24.082329805s STEP: Saw pod success Apr 6 21:37:24.213: INFO: Pod "pod-subpath-test-secret-b65z" satisfied condition "success or failure" Apr 6 21:37:24.216: INFO: Trying to get logs from node jerma-worker2 pod pod-subpath-test-secret-b65z container test-container-subpath-secret-b65z: STEP: delete the pod Apr 6 21:37:24.251: INFO: Waiting for pod pod-subpath-test-secret-b65z to disappear Apr 6 21:37:24.260: INFO: Pod pod-subpath-test-secret-b65z no longer exists STEP: Deleting pod pod-subpath-test-secret-b65z Apr 6 21:37:24.260: INFO: Deleting pod "pod-subpath-test-secret-b65z" in namespace "subpath-2045" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 6 21:37:24.262: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-2045" for this suite. • [SLOW TEST:24.222 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with secret pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [LinuxOnly] [Conformance]","total":278,"completed":113,"skipped":1878,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 6 21:37:24.270: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Apr 6 21:37:25.061: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Apr 6 21:37:27.145: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721805845, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721805845, loc:(*time.Location)(0x78ee080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721805845, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721805844, loc:(*time.Location)(0x78ee080)}}, Reason:"ReplicaSetUpdated", 
Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Apr 6 21:37:30.213: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] listing validating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Listing all of the created validation webhooks STEP: Creating a configMap that does not comply to the validation webhook rules STEP: Deleting the collection of validation webhooks STEP: Creating a configMap that does not comply to the validation webhook rules [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 6 21:37:30.638: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-8257" for this suite. STEP: Destroying namespace "webhook-8257-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:6.469 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 listing validating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance]","total":278,"completed":114,"skipped":1920,"failed":0} SSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl rolling-update should support rolling-update to same image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 6 21:37:30.739: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [BeforeEach] Kubectl rolling-update /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1585 [It] should support rolling-update to same image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: running the image docker.io/library/httpd:2.4.38-alpine Apr 6 21:37:30.813: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-rc --image=docker.io/library/httpd:2.4.38-alpine --generator=run/v1 --namespace=kubectl-5984' Apr 6 21:37:30.915: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. 
Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" Apr 6 21:37:30.915: INFO: stdout: "replicationcontroller/e2e-test-httpd-rc created\n" STEP: verifying the rc e2e-test-httpd-rc was created Apr 6 21:37:30.930: INFO: Waiting for rc e2e-test-httpd-rc to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0 STEP: rolling-update to same image controller Apr 6 21:37:30.947: INFO: scanned /root for discovery docs: Apr 6 21:37:30.947: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update e2e-test-httpd-rc --update-period=1s --image=docker.io/library/httpd:2.4.38-alpine --image-pull-policy=IfNotPresent --namespace=kubectl-5984' Apr 6 21:37:46.976: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n" Apr 6 21:37:46.976: INFO: stdout: "Created e2e-test-httpd-rc-0ac688a730a544ed261c478f5fdd1bf8\nScaling up e2e-test-httpd-rc-0ac688a730a544ed261c478f5fdd1bf8 from 0 to 1, scaling down e2e-test-httpd-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)\nScaling e2e-test-httpd-rc-0ac688a730a544ed261c478f5fdd1bf8 up to 1\nScaling e2e-test-httpd-rc down to 0\nUpdate succeeded. Deleting old controller: e2e-test-httpd-rc\nRenaming e2e-test-httpd-rc-0ac688a730a544ed261c478f5fdd1bf8 to e2e-test-httpd-rc\nreplicationcontroller/e2e-test-httpd-rc rolling updated\n" STEP: waiting for all containers in run=e2e-test-httpd-rc pods to come up. Apr 6 21:37:46.976: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-httpd-rc --namespace=kubectl-5984' Apr 6 21:37:47.062: INFO: stderr: "" Apr 6 21:37:47.062: INFO: stdout: "e2e-test-httpd-rc-0ac688a730a544ed261c478f5fdd1bf8-dd7cg " Apr 6 21:37:47.063: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-httpd-rc-0ac688a730a544ed261c478f5fdd1bf8-dd7cg -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "e2e-test-httpd-rc") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5984' Apr 6 21:37:47.158: INFO: stderr: "" Apr 6 21:37:47.158: INFO: stdout: "true" Apr 6 21:37:47.158: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-httpd-rc-0ac688a730a544ed261c478f5fdd1bf8-dd7cg -o template --template={{if (exists . 
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "e2e-test-httpd-rc"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-5984' Apr 6 21:37:47.253: INFO: stderr: "" Apr 6 21:37:47.253: INFO: stdout: "docker.io/library/httpd:2.4.38-alpine" Apr 6 21:37:47.253: INFO: e2e-test-httpd-rc-0ac688a730a544ed261c478f5fdd1bf8-dd7cg is verified up and running [AfterEach] Kubectl rolling-update /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1591 Apr 6 21:37:47.254: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-httpd-rc --namespace=kubectl-5984' Apr 6 21:37:47.353: INFO: stderr: "" Apr 6 21:37:47.353: INFO: stdout: "replicationcontroller \"e2e-test-httpd-rc\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 6 21:37:47.353: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-5984" for this suite. • [SLOW TEST:16.621 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl rolling-update /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1580 should support rolling-update to same image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl rolling-update should support rolling-update to same image [Conformance]","total":278,"completed":115,"skipped":1927,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 6 21:37:47.361: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir volume type on node default medium Apr 6 21:37:47.421: INFO: Waiting up to 5m0s for pod "pod-05ac0869-6970-46e0-a365-6aed0ef16c51" in namespace "emptydir-8976" to be "success or failure" Apr 6 21:37:47.457: INFO: Pod "pod-05ac0869-6970-46e0-a365-6aed0ef16c51": Phase="Pending", Reason="", readiness=false. Elapsed: 36.674423ms Apr 6 21:37:49.462: INFO: Pod "pod-05ac0869-6970-46e0-a365-6aed0ef16c51": Phase="Pending", Reason="", readiness=false. Elapsed: 2.040968381s Apr 6 21:37:51.466: INFO: Pod "pod-05ac0869-6970-46e0-a365-6aed0ef16c51": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.045262076s STEP: Saw pod success Apr 6 21:37:51.466: INFO: Pod "pod-05ac0869-6970-46e0-a365-6aed0ef16c51" satisfied condition "success or failure" Apr 6 21:37:51.469: INFO: Trying to get logs from node jerma-worker pod pod-05ac0869-6970-46e0-a365-6aed0ef16c51 container test-container: STEP: delete the pod Apr 6 21:37:51.512: INFO: Waiting for pod pod-05ac0869-6970-46e0-a365-6aed0ef16c51 to disappear Apr 6 21:37:51.521: INFO: Pod pod-05ac0869-6970-46e0-a365-6aed0ef16c51 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 6 21:37:51.521: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-8976" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir volumes volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":116,"skipped":1986,"failed":0} S ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 6 21:37:51.528: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:86 Apr 6 21:37:51.580: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Apr 6 21:37:51.598: INFO: Waiting for terminating namespaces to be deleted... 
Apr 6 21:37:51.600: INFO: Logging pods the kubelet thinks are on node jerma-worker before test Apr 6 21:37:51.604: INFO: kindnet-c5svj from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container status recorded) Apr 6 21:37:51.604: INFO: Container kindnet-cni ready: true, restart count 0 Apr 6 21:37:51.604: INFO: kube-proxy-44mlz from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container status recorded) Apr 6 21:37:51.604: INFO: Container kube-proxy ready: true, restart count 0 Apr 6 21:37:51.604: INFO: Logging pods the kubelet thinks are on node jerma-worker2 before test Apr 6 21:37:51.609: INFO: kube-hunter-8g6pb from default started at 2020-03-26 15:21:33 +0000 UTC (1 container status recorded) Apr 6 21:37:51.609: INFO: Container kube-hunter ready: false, restart count 0 Apr 6 21:37:51.609: INFO: kindnet-zk6sq from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container status recorded) Apr 6 21:37:51.609: INFO: Container kindnet-cni ready: true, restart count 0 Apr 6 21:37:51.609: INFO: kube-bench-hk6h6 from default started at 2020-03-26 15:21:52 +0000 UTC (1 container status recorded) Apr 6 21:37:51.609: INFO: Container kube-bench ready: false, restart count 0 Apr 6 21:37:51.609: INFO: kube-proxy-75q42 from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container status recorded) Apr 6 21:37:51.609: INFO: Container kube-proxy ready: true, restart count 0 [It] validates that NodeSelector is respected if not matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Trying to schedule Pod with nonempty NodeSelector. STEP: Considering event: Type = [Warning], Name = [restricted-pod.160358bf1dbaa013], Reason = [FailedScheduling], Message = [0/3 nodes are available: 3 node(s) didn't match node selector.] [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 6 21:37:52.629: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-3004" for this suite. 
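
The FailedScheduling event just logged ("0/3 nodes are available: 3 node(s) didn't match node selector") is exactly what a pod with an unsatisfiable nodeSelector produces. A minimal sketch (the label key and value are made up; restricted-pod matches the event name above):

    package main

    import (
        "encoding/json"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    func main() {
        pod := corev1.Pod{
            TypeMeta:   metav1.TypeMeta{APIVersion: "v1", Kind: "Pod"},
            ObjectMeta: metav1.ObjectMeta{Name: "restricted-pod"},
            Spec: corev1.PodSpec{
                // No node carries this label, so the scheduler can only emit
                // FailedScheduling and the pod stays Pending.
                NodeSelector: map[string]string{"kubernetes.io/e2e-nonexistent": "true"}, // hypothetical label
                Containers:   []corev1.Container{{Name: "pause", Image: "k8s.gcr.io/pause:3.1"}},
            },
        }
        out, _ := json.MarshalIndent(&pod, "", "  ")
        fmt.Println(string(out))
    }
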
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:77 •{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance]","total":278,"completed":117,"skipped":1987,"failed":0} ------------------------------ [sig-storage] Downward API volume should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 6 21:37:52.636: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Apr 6 21:37:52.724: INFO: Waiting up to 5m0s for pod "downwardapi-volume-0a2114a6-8c9a-440a-ab38-18f2592e01f3" in namespace "downward-api-7927" to be "success or failure" Apr 6 21:37:52.734: INFO: Pod "downwardapi-volume-0a2114a6-8c9a-440a-ab38-18f2592e01f3": Phase="Pending", Reason="", readiness=false. Elapsed: 10.748866ms Apr 6 21:37:54.739: INFO: Pod "downwardapi-volume-0a2114a6-8c9a-440a-ab38-18f2592e01f3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015100452s Apr 6 21:37:56.741: INFO: Pod "downwardapi-volume-0a2114a6-8c9a-440a-ab38-18f2592e01f3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.01792607s STEP: Saw pod success Apr 6 21:37:56.742: INFO: Pod "downwardapi-volume-0a2114a6-8c9a-440a-ab38-18f2592e01f3" satisfied condition "success or failure" Apr 6 21:37:56.744: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-0a2114a6-8c9a-440a-ab38-18f2592e01f3 container client-container: STEP: delete the pod Apr 6 21:37:56.759: INFO: Waiting for pod downwardapi-volume-0a2114a6-8c9a-440a-ab38-18f2592e01f3 to disappear Apr 6 21:37:56.771: INFO: Pod downwardapi-volume-0a2114a6-8c9a-440a-ab38-18f2592e01f3 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 6 21:37:56.771: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-7927" for this suite. 
•{"msg":"PASSED [sig-storage] Downward API volume should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":118,"skipped":1987,"failed":0} ------------------------------ [sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 6 21:37:56.778: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Apr 6 21:37:56.863: INFO: Waiting up to 5m0s for pod "downwardapi-volume-3481f061-3e0d-480f-81d0-7a9da9457fae" in namespace "downward-api-6619" to be "success or failure" Apr 6 21:37:56.888: INFO: Pod "downwardapi-volume-3481f061-3e0d-480f-81d0-7a9da9457fae": Phase="Pending", Reason="", readiness=false. Elapsed: 25.357506ms Apr 6 21:37:58.913: INFO: Pod "downwardapi-volume-3481f061-3e0d-480f-81d0-7a9da9457fae": Phase="Pending", Reason="", readiness=false. Elapsed: 2.050534149s Apr 6 21:38:00.917: INFO: Pod "downwardapi-volume-3481f061-3e0d-480f-81d0-7a9da9457fae": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.054611899s STEP: Saw pod success Apr 6 21:38:00.917: INFO: Pod "downwardapi-volume-3481f061-3e0d-480f-81d0-7a9da9457fae" satisfied condition "success or failure" Apr 6 21:38:00.920: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-3481f061-3e0d-480f-81d0-7a9da9457fae container client-container: STEP: delete the pod Apr 6 21:38:00.987: INFO: Waiting for pod downwardapi-volume-3481f061-3e0d-480f-81d0-7a9da9457fae to disappear Apr 6 21:38:01.000: INFO: Pod downwardapi-volume-3481f061-3e0d-480f-81d0-7a9da9457fae no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 6 21:38:01.001: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-6619" for this suite. 
•{"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance]","total":278,"completed":119,"skipped":1987,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 6 21:38:01.009: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Apr 6 21:38:01.476: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Apr 6 21:38:03.494: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721805881, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721805881, loc:(*time.Location)(0x78ee080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721805881, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721805881, loc:(*time.Location)(0x78ee080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Apr 6 21:38:06.531: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate custom resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Apr 6 21:38:06.534: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the mutating webhook for custom resource e2e-test-webhook-3019-crds.webhook.example.com via the AdmissionRegistration API STEP: Creating a custom resource that should be mutated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 6 21:38:07.653: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-2855" for this suite. STEP: Destroying namespace "webhook-2855-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:6.750 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate custom resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]","total":278,"completed":120,"skipped":2046,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 6 21:38:07.760: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name configmap-test-volume-map-7a5eabd7-d3c6-4291-b381-946de00cbf59 STEP: Creating a pod to test consume configMaps Apr 6 21:38:07.863: INFO: Waiting up to 5m0s for pod "pod-configmaps-a8bac477-9eed-487f-87bc-017665f923a9" in namespace "configmap-8439" to be "success or failure" Apr 6 21:38:07.928: INFO: Pod "pod-configmaps-a8bac477-9eed-487f-87bc-017665f923a9": Phase="Pending", Reason="", readiness=false. Elapsed: 64.405557ms Apr 6 21:38:09.931: INFO: Pod "pod-configmaps-a8bac477-9eed-487f-87bc-017665f923a9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.067939623s Apr 6 21:38:11.936: INFO: Pod "pod-configmaps-a8bac477-9eed-487f-87bc-017665f923a9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.072885179s STEP: Saw pod success Apr 6 21:38:11.936: INFO: Pod "pod-configmaps-a8bac477-9eed-487f-87bc-017665f923a9" satisfied condition "success or failure" Apr 6 21:38:11.939: INFO: Trying to get logs from node jerma-worker pod pod-configmaps-a8bac477-9eed-487f-87bc-017665f923a9 container configmap-volume-test: STEP: delete the pod Apr 6 21:38:11.976: INFO: Waiting for pod pod-configmaps-a8bac477-9eed-487f-87bc-017665f923a9 to disappear Apr 6 21:38:11.993: INFO: Pod pod-configmaps-a8bac477-9eed-487f-87bc-017665f923a9 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 6 21:38:11.993: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-8439" for this suite. 
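
The "as non-root" variant just logged differs from the earlier ConfigMap volume tests only in the pod-level security context. A sketch with a hypothetical UID of 1000 (the log does not show the UID the suite uses):

    package main

    import (
        "encoding/json"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    func main() {
        uid := int64(1000) // hypothetical non-root UID
        pod := corev1.Pod{
            TypeMeta:   metav1.TypeMeta{APIVersion: "v1", Kind: "Pod"},
            ObjectMeta: metav1.ObjectMeta{Name: "cm-nonroot-demo"}, // hypothetical name
            Spec: corev1.PodSpec{
                // Every container in the pod runs as this UID.
                SecurityContext: &corev1.PodSecurityContext{RunAsUser: &uid},
                RestartPolicy:   corev1.RestartPolicyNever,
                Volumes: []corev1.Volume{{
                    Name: "cm",
                    VolumeSource: corev1.VolumeSource{ConfigMap: &corev1.ConfigMapVolumeSource{
                        LocalObjectReference: corev1.LocalObjectReference{Name: "my-config"}, // hypothetical
                    }},
                }},
                Containers: []corev1.Container{{
                    Name:         "reader",
                    Image:        "busybox",
                    Command:      []string{"sh", "-c", "id -u && cat /etc/cm/*"},
                    VolumeMounts: []corev1.VolumeMount{{Name: "cm", MountPath: "/etc/cm"}},
                }},
            },
        }
        out, _ := json.MarshalIndent(&pod, "", "  ")
        fmt.Println(string(out))
    }
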
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":121,"skipped":2063,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 6 21:38:12.001: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:86 Apr 6 21:38:12.060: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Apr 6 21:38:12.078: INFO: Waiting for terminating namespaces to be deleted... Apr 6 21:38:12.080: INFO: Logging pods the kubelet thinks is on node jerma-worker before test Apr 6 21:38:12.085: INFO: kindnet-c5svj from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container statuses recorded) Apr 6 21:38:12.085: INFO: Container kindnet-cni ready: true, restart count 0 Apr 6 21:38:12.085: INFO: kube-proxy-44mlz from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container statuses recorded) Apr 6 21:38:12.085: INFO: Container kube-proxy ready: true, restart count 0 Apr 6 21:38:12.085: INFO: Logging pods the kubelet thinks is on node jerma-worker2 before test Apr 6 21:38:12.090: INFO: kindnet-zk6sq from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container statuses recorded) Apr 6 21:38:12.090: INFO: Container kindnet-cni ready: true, restart count 0 Apr 6 21:38:12.090: INFO: kube-bench-hk6h6 from default started at 2020-03-26 15:21:52 +0000 UTC (1 container statuses recorded) Apr 6 21:38:12.090: INFO: Container kube-bench ready: false, restart count 0 Apr 6 21:38:12.090: INFO: kube-proxy-75q42 from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container statuses recorded) Apr 6 21:38:12.090: INFO: Container kube-proxy ready: true, restart count 0 Apr 6 21:38:12.090: INFO: kube-hunter-8g6pb from default started at 2020-03-26 15:21:33 +0000 UTC (1 container statuses recorded) Apr 6 21:38:12.090: INFO: Container kube-hunter ready: false, restart count 0 [It] validates that NodeSelector is respected if matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random label on the found node. STEP: verifying the node has the label kubernetes.io/e2e-ca8bb5ef-0a3d-4e9a-b4f6-51783cbeb103 42 STEP: Trying to relaunch the pod, now with labels. 
STEP: removing the label kubernetes.io/e2e-ca8bb5ef-0a3d-4e9a-b4f6-51783cbeb103 off the node jerma-worker2 STEP: verifying the node doesn't have the label kubernetes.io/e2e-ca8bb5ef-0a3d-4e9a-b4f6-51783cbeb103 [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 6 21:38:20.310: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-1913" for this suite. [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:77 • [SLOW TEST:8.317 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates that NodeSelector is respected if matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance]","total":278,"completed":122,"skipped":2115,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 6 21:38:20.318: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Apr 6 21:38:20.893: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Apr 6 21:38:22.903: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721805900, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721805900, loc:(*time.Location)(0x78ee080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721805900, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721805900, loc:(*time.Location)(0x78ee080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Apr 6 21:38:25.968: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] listing mutating webhooks should work [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Listing all of the created mutating webhooks STEP: Creating a configMap that should be mutated STEP: Deleting the collection of mutating webhooks STEP: Creating a configMap that should not be mutated [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 6 21:38:26.549: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-462" for this suite. STEP: Destroying namespace "webhook-462-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:6.305 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 listing mutating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","total":278,"completed":123,"skipped":2130,"failed":0} SSSSS ------------------------------ [sig-cli] Kubectl client Kubectl run deployment should create a deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 6 21:38:26.623: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [BeforeEach] Kubectl run deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1626 [It] should create a deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: running the image docker.io/library/httpd:2.4.38-alpine Apr 6 21:38:26.677: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-deployment --image=docker.io/library/httpd:2.4.38-alpine --generator=deployment/apps.v1 --namespace=kubectl-7232' Apr 6 21:38:26.778: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. 
Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" Apr 6 21:38:26.778: INFO: stdout: "deployment.apps/e2e-test-httpd-deployment created\n" STEP: verifying the deployment e2e-test-httpd-deployment was created STEP: verifying the pod controlled by deployment e2e-test-httpd-deployment was created [AfterEach] Kubectl run deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1631 Apr 6 21:38:28.846: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-httpd-deployment --namespace=kubectl-7232' Apr 6 21:38:28.945: INFO: stderr: "" Apr 6 21:38:28.945: INFO: stdout: "deployment.apps \"e2e-test-httpd-deployment\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 6 21:38:28.945: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-7232" for this suite. •{"msg":"PASSED [sig-cli] Kubectl client Kubectl run deployment should create a deployment from an image [Conformance]","total":278,"completed":124,"skipped":2135,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition listing custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 6 21:38:28.953: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] listing custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Apr 6 21:38:29.122: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 6 21:38:35.540: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-5950" for this suite. 
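
The CRD test just above registers a handful of definitions and lists them back. For reference, a sketch of a v1beta1 CustomResourceDefinition shaped like the noxus/WishIHadChosenNoxu fixture seen in the watch test earlier in this log (cluster scope inferred from its selfLink; the listing test's own fixtures are not shown in the output):

    package main

    import (
        "encoding/json"
        "fmt"

        apiextv1beta1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1beta1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    func main() {
        crd := apiextv1beta1.CustomResourceDefinition{
            TypeMeta:   metav1.TypeMeta{APIVersion: "apiextensions.k8s.io/v1beta1", Kind: "CustomResourceDefinition"},
            ObjectMeta: metav1.ObjectMeta{Name: "noxus.mygroup.example.com"}, // must be <plural>.<group>
            Spec: apiextv1beta1.CustomResourceDefinitionSpec{
                Group:   "mygroup.example.com",
                Version: "v1beta1", // v1beta1-era field; apiextensions.k8s.io/v1 uses spec.versions
                Scope:   apiextv1beta1.ClusterScoped,
                Names: apiextv1beta1.CustomResourceDefinitionNames{
                    Plural: "noxus",
                    Kind:   "WishIHadChosenNoxu",
                },
            },
        }
        out, _ := json.MarshalIndent(&crd, "", "  ")
        fmt.Println(string(out))
    }
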
• [SLOW TEST:6.595 seconds] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 Simple CustomResourceDefinition /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/custom_resource_definition.go:47 listing custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition listing custom resource definition objects works [Conformance]","total":278,"completed":125,"skipped":2167,"failed":0} [k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 6 21:38:35.548: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Apr 6 21:38:55.621: INFO: Container started at 2020-04-06 21:38:37 +0000 UTC, pod became ready at 2020-04-06 21:38:54 +0000 UTC [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 6 21:38:55.621: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-9459" for this suite. 
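Note on the "Probing container" spec above: the readiness probe carries an initialDelaySeconds, so the container starts (21:38:37) well before the pod turns Ready (21:38:54), and the restart count must stay 0 throughout. A minimal sketch of such a pod, assuming a busybox image and hypothetical names (not the exact manifest the suite uses):

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: readiness-demo          # hypothetical name
spec:
  containers:
  - name: probe-test
    image: busybox:1.29
    command: ["sh", "-c", "touch /tmp/healthy && sleep 600"]
    readinessProbe:
      exec:
        command: ["cat", "/tmp/healthy"]
      initialDelaySeconds: 15   # pod must not report Ready before this delay
      periodSeconds: 5
EOF
kubectl get pod readiness-demo -w   # READY should flip 0/1 -> 1/1 only after ~15s, RESTARTS staying 0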
• [SLOW TEST:20.081 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]","total":278,"completed":126,"skipped":2167,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 6 21:38:55.630: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name secret-test-d22e6312-b1f8-4b06-be0e-cb985937b774 STEP: Creating a pod to test consume secrets Apr 6 21:38:55.717: INFO: Waiting up to 5m0s for pod "pod-secrets-dd753e8a-d295-4993-9e0d-b499fcf95179" in namespace "secrets-866" to be "success or failure" Apr 6 21:38:55.726: INFO: Pod "pod-secrets-dd753e8a-d295-4993-9e0d-b499fcf95179": Phase="Pending", Reason="", readiness=false. Elapsed: 9.359288ms Apr 6 21:38:57.730: INFO: Pod "pod-secrets-dd753e8a-d295-4993-9e0d-b499fcf95179": Phase="Pending", Reason="", readiness=false. Elapsed: 2.0133602s Apr 6 21:38:59.735: INFO: Pod "pod-secrets-dd753e8a-d295-4993-9e0d-b499fcf95179": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.017839575s STEP: Saw pod success Apr 6 21:38:59.735: INFO: Pod "pod-secrets-dd753e8a-d295-4993-9e0d-b499fcf95179" satisfied condition "success or failure" Apr 6 21:38:59.738: INFO: Trying to get logs from node jerma-worker2 pod pod-secrets-dd753e8a-d295-4993-9e0d-b499fcf95179 container secret-volume-test: STEP: delete the pod Apr 6 21:38:59.769: INFO: Waiting for pod pod-secrets-dd753e8a-d295-4993-9e0d-b499fcf95179 to disappear Apr 6 21:38:59.794: INFO: Pod pod-secrets-dd753e8a-d295-4993-9e0d-b499fcf95179 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 6 21:38:59.795: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-866" for this suite. 
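Note on the "Secrets should be consumable from pods in volume" spec above: the pattern is a Secret mounted as a volume, read back by a one-shot container, with the pod then checked for the "success or failure" condition (Succeeded phase). A minimal sketch with hypothetical names and key:

kubectl create secret generic secret-demo --from-literal=data-1=value-1
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets-demo
spec:
  restartPolicy: Never
  containers:
  - name: secret-volume-test
    image: busybox:1.29
    command: ["sh", "-c", "cat /etc/secret-volume/data-1"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
      readOnly: true
  volumes:
  - name: secret-volume
    secret:
      secretName: secret-demo
EOF
kubectl logs pod-secrets-demo   # prints value-1 once the pod has Succeeded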
•{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance]","total":278,"completed":127,"skipped":2207,"failed":0} SSSSSSS ------------------------------ [k8s.io] Security Context when creating containers with AllowPrivilegeEscalation should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 6 21:38:59.808: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:39 [It] should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Apr 6 21:38:59.879: INFO: Waiting up to 5m0s for pod "alpine-nnp-false-db74ea7a-fbe4-47c7-8cc8-9763b006e205" in namespace "security-context-test-8469" to be "success or failure" Apr 6 21:38:59.894: INFO: Pod "alpine-nnp-false-db74ea7a-fbe4-47c7-8cc8-9763b006e205": Phase="Pending", Reason="", readiness=false. Elapsed: 15.226713ms Apr 6 21:39:01.957: INFO: Pod "alpine-nnp-false-db74ea7a-fbe4-47c7-8cc8-9763b006e205": Phase="Pending", Reason="", readiness=false. Elapsed: 2.077761038s Apr 6 21:39:03.960: INFO: Pod "alpine-nnp-false-db74ea7a-fbe4-47c7-8cc8-9763b006e205": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.081413122s Apr 6 21:39:03.960: INFO: Pod "alpine-nnp-false-db74ea7a-fbe4-47c7-8cc8-9763b006e205" satisfied condition "success or failure" [AfterEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 6 21:39:03.967: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-8469" for this suite. 
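Note on the "Security Context ... AllowPrivilegeEscalation" spec above: allowPrivilegeEscalation: false makes the kubelet start the container process with the Linux no_new_privs flag set, which is what the alpine-nnp-false-* pod asserts before reaching Succeeded. A minimal sketch under those assumptions (hypothetical names):

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: alpine-nnp-false-demo
spec:
  restartPolicy: Never
  containers:
  - name: nnp-test
    image: alpine:3.11
    command: ["sh", "-c", "grep NoNewPrivs /proc/self/status"]   # expect NoNewPrivs: 1
    securityContext:
      allowPrivilegeEscalation: false
EOF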
•{"msg":"PASSED [k8s.io] Security Context when creating containers with AllowPrivilegeEscalation should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":128,"skipped":2214,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 6 21:39:03.976: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the deployment STEP: Wait for the Deployment to create new ReplicaSet STEP: delete the deployment STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the rs STEP: Gathering metrics W0406 21:39:34.590721 6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Apr 6 21:39:34.590: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 6 21:39:34.590: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-9802" for this suite. 
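Note on the "Garbage collector should orphan RS" spec above: deleting a Deployment with deleteOptions.propagationPolicy=Orphan removes the owning object but leaves its ReplicaSet (and Pods) behind, ownerless, and the spec waits 30 seconds to confirm the garbage collector does not collect them by mistake. A minimal CLI-side sketch (hypothetical names; the --cascade=orphan spelling needs kubectl >= 1.20, older clients used --cascade=false):

kubectl create deployment gc-demo --image=docker.io/library/httpd:2.4.38-alpine
kubectl delete deployment gc-demo --cascade=orphan   # maps to propagationPolicy: Orphan
kubectl get rs -l app=gc-demo                        # the ReplicaSet survives the delete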
• [SLOW TEST:30.623 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]","total":278,"completed":129,"skipped":2226,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should serve a basic endpoint from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 6 21:39:34.599: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139 [It] should serve a basic endpoint from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating service endpoint-test2 in namespace services-3964 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-3964 to expose endpoints map[] Apr 6 21:39:34.708: INFO: Get endpoints failed (2.785701ms elapsed, ignoring for 5s): endpoints "endpoint-test2" not found Apr 6 21:39:35.712: INFO: successfully validated that service endpoint-test2 in namespace services-3964 exposes endpoints map[] (1.00726748s elapsed) STEP: Creating pod pod1 in namespace services-3964 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-3964 to expose endpoints map[pod1:[80]] Apr 6 21:39:39.779: INFO: successfully validated that service endpoint-test2 in namespace services-3964 exposes endpoints map[pod1:[80]] (4.061014755s elapsed) STEP: Creating pod pod2 in namespace services-3964 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-3964 to expose endpoints map[pod1:[80] pod2:[80]] Apr 6 21:39:42.888: INFO: successfully validated that service endpoint-test2 in namespace services-3964 exposes endpoints map[pod1:[80] pod2:[80]] (3.099396988s elapsed) STEP: Deleting pod pod1 in namespace services-3964 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-3964 to expose endpoints map[pod2:[80]] Apr 6 21:39:43.970: INFO: successfully validated that service endpoint-test2 in namespace services-3964 exposes endpoints map[pod2:[80]] (1.076126811s elapsed) STEP: Deleting pod pod2 in namespace services-3964 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-3964 to expose endpoints map[] Apr 6 21:39:45.006: INFO: successfully validated that service endpoint-test2 in namespace services-3964 exposes endpoints map[] (1.030464717s elapsed) [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 6 21:39:45.177: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-3964" for this suite. 
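Note on the "Services should serve a basic endpoint" spec above: the Endpoints object of a selector-based Service tracks matching ready pods, which is why the exposed map walks [] -> [pod1:[80]] -> [pod1:[80] pod2:[80]] and back down as pods are created and deleted. A minimal sketch of the same dance, with hypothetical names (on current kubectl, run creates a bare pod):

kubectl create service clusterip endpoint-demo --tcp=80:80   # selector defaults to app=endpoint-demo
kubectl run pod1 --image=docker.io/library/httpd:2.4.38-alpine --labels=app=endpoint-demo --port=80
kubectl get endpoints endpoint-demo -w   # addresses appear as matching pods turn Ready
kubectl delete pod pod1                  # and disappear again on deletion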
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143 • [SLOW TEST:10.612 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should serve a basic endpoint from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Services should serve a basic endpoint from pods [Conformance]","total":278,"completed":130,"skipped":2249,"failed":0} SSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] updates the published spec when one version gets renamed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 6 21:39:45.211: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] updates the published spec when one version gets renamed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: set up a multi version CRD Apr 6 21:39:45.263: INFO: >>> kubeConfig: /root/.kube/config STEP: rename a version STEP: check the new version name is served STEP: check the old version name is removed STEP: check the other version is not changed [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 6 21:39:59.959: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-9578" for this suite. 
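Note on the "CustomResourcePublishOpenAPI ... version gets renamed" spec above: the structural schema of every served CRD version is published into the aggregated OpenAPI document, so renaming a version must be reflected in what clients see while the untouched version stays the same. A minimal sketch of observing this from the CLI (placeholder CRD name/kind; any cluster with the CRD installed):

kubectl get crd <crd-name> -o jsonpath='{.spec.versions[*].name}'   # served version names
kubectl explain <crd-kind> --recursive                              # rendered from the published OpenAPI schema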
• [SLOW TEST:14.755 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 updates the published spec when one version gets renamed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] updates the published spec when one version gets renamed [Conformance]","total":278,"completed":131,"skipped":2259,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-network] Proxy version v1 should proxy logs on node with explicit kubelet port using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 6 21:39:59.966: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy logs on node with explicit kubelet port using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Apr 6 21:40:00.056: INFO: (0) /api/v1/nodes/jerma-worker:10250/proxy/logs/:
containers/ pods/ (200; 17.157122ms)
Apr 6 21:40:00.059: INFO: (1) /api/v1/nodes/jerma-worker:10250/proxy/logs/: containers/ pods/ (200; 2.654135ms)
Apr 6 21:40:00.062: INFO: (2) /api/v1/nodes/jerma-worker:10250/proxy/logs/: containers/ pods/ (200; 3.255537ms)
Apr 6 21:40:00.065: INFO: (3) /api/v1/nodes/jerma-worker:10250/proxy/logs/: containers/ pods/ (200; 3.291417ms)
Apr 6 21:40:00.068: INFO: (4) /api/v1/nodes/jerma-worker:10250/proxy/logs/: containers/ pods/ (200; 2.660693ms)
Apr 6 21:40:00.072: INFO: (5) /api/v1/nodes/jerma-worker:10250/proxy/logs/: containers/ pods/ (200; 3.567206ms)
Apr 6 21:40:00.074: INFO: (6) /api/v1/nodes/jerma-worker:10250/proxy/logs/: containers/ pods/ (200; 2.765487ms)
Apr 6 21:40:00.077: INFO: (7) /api/v1/nodes/jerma-worker:10250/proxy/logs/: containers/ pods/ (200; 2.987122ms)
Apr 6 21:40:00.080: INFO: (8) /api/v1/nodes/jerma-worker:10250/proxy/logs/: containers/ pods/ (200; 2.955168ms)
Apr 6 21:40:00.084: INFO: (9) /api/v1/nodes/jerma-worker:10250/proxy/logs/: containers/ pods/ (200; 3.316358ms)
Apr 6 21:40:00.087: INFO: (10) /api/v1/nodes/jerma-worker:10250/proxy/logs/: containers/ pods/ (200; 3.104596ms)
Apr 6 21:40:00.091: INFO: (11) /api/v1/nodes/jerma-worker:10250/proxy/logs/: containers/ pods/ (200; 3.71352ms)
Apr 6 21:40:00.094: INFO: (12) /api/v1/nodes/jerma-worker:10250/proxy/logs/: containers/ pods/ (200; 3.128875ms)
Apr 6 21:40:00.097: INFO: (13) /api/v1/nodes/jerma-worker:10250/proxy/logs/: containers/ pods/ (200; 3.261802ms)
Apr 6 21:40:00.101: INFO: (14) /api/v1/nodes/jerma-worker:10250/proxy/logs/: containers/ pods/ (200; 4.01919ms)
Apr 6 21:40:00.105: INFO: (15) /api/v1/nodes/jerma-worker:10250/proxy/logs/: containers/ pods/ (200; 3.904351ms)
Apr 6 21:40:00.109: INFO: (16) /api/v1/nodes/jerma-worker:10250/proxy/logs/: containers/ pods/ (200; 3.649877ms)
Apr 6 21:40:00.113: INFO: (17) /api/v1/nodes/jerma-worker:10250/proxy/logs/: containers/ pods/ (200; 3.692648ms)
Apr 6 21:40:00.116: INFO: (18) /api/v1/nodes/jerma-worker:10250/proxy/logs/: containers/ pods/ (200; 3.609935ms)
Apr 6 21:40:00.143: INFO: (19) /api/v1/nodes/jerma-worker:10250/proxy/logs/: containers/ pods/
(200; 26.850599ms) [AfterEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 6 21:40:00.143: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "proxy-7019" for this suite. •{"msg":"PASSED [sig-network] Proxy version v1 should proxy logs on node with explicit kubelet port using proxy subresource [Conformance]","total":278,"completed":132,"skipped":2275,"failed":0} ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition getting/updating/patching custom resource definition status sub-resource works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 6 21:40:00.152: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] getting/updating/patching custom resource definition status sub-resource works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Apr 6 21:40:00.201: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 6 21:40:00.761: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-373" for this suite. •{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition getting/updating/patching custom resource definition status sub-resource works [Conformance]","total":278,"completed":133,"skipped":2275,"failed":0} ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replication controller. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 6 21:40:00.790: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a replication controller. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a ReplicationController STEP: Ensuring resource quota status captures replication controller creation STEP: Deleting a ReplicationController STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 6 21:40:11.937: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-7760" for this suite. • [SLOW TEST:11.156 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a replication controller. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replication controller. [Conformance]","total":278,"completed":134,"skipped":2275,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 6 21:40:11.946: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a test namespace STEP: Waiting for a default service account to be provisioned in namespace STEP: Creating a service in the namespace STEP: Deleting the namespace STEP: Waiting for the namespace to be removed. STEP: Recreating the namespace STEP: Verifying there is no service in the namespace [AfterEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 6 21:40:18.194: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "namespaces-3454" for this suite. STEP: Destroying namespace "nsdeletetest-1185" for this suite. Apr 6 21:40:18.206: INFO: Namespace nsdeletetest-1185 was already deleted STEP: Destroying namespace "nsdeletetest-833" for this suite. 
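Note on the "Namespaces [Serial]" spec above: deleting a namespace finalizes everything inside it, so a Service created there must be gone once the namespace is removed, and absent again if a namespace of the same name is recreated. A minimal CLI sketch with hypothetical names:

kubectl create namespace nsdelete-demo
kubectl create service clusterip test-service --tcp=80:80 -n nsdelete-demo
kubectl delete namespace nsdelete-demo    # blocks until the contents are finalized
kubectl create namespace nsdelete-demo
kubectl get services -n nsdelete-demo     # No resources found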
• [SLOW TEST:6.263 seconds] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance]","total":278,"completed":135,"skipped":2319,"failed":0} SSSSSSSSSSS ------------------------------ [k8s.io] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 6 21:40:18.210: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow substituting values in a container's command [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test substitution in container's command Apr 6 21:40:18.264: INFO: Waiting up to 5m0s for pod "var-expansion-a10eacee-e721-4152-badc-13617166e131" in namespace "var-expansion-4504" to be "success or failure" Apr 6 21:40:18.266: INFO: Pod "var-expansion-a10eacee-e721-4152-badc-13617166e131": Phase="Pending", Reason="", readiness=false. Elapsed: 2.495003ms Apr 6 21:40:20.270: INFO: Pod "var-expansion-a10eacee-e721-4152-badc-13617166e131": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006374032s Apr 6 21:40:22.275: INFO: Pod "var-expansion-a10eacee-e721-4152-badc-13617166e131": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010645649s STEP: Saw pod success Apr 6 21:40:22.275: INFO: Pod "var-expansion-a10eacee-e721-4152-badc-13617166e131" satisfied condition "success or failure" Apr 6 21:40:22.277: INFO: Trying to get logs from node jerma-worker pod var-expansion-a10eacee-e721-4152-badc-13617166e131 container dapi-container: STEP: delete the pod Apr 6 21:40:22.291: INFO: Waiting for pod var-expansion-a10eacee-e721-4152-badc-13617166e131 to disappear Apr 6 21:40:22.346: INFO: Pod var-expansion-a10eacee-e721-4152-badc-13617166e131 no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 6 21:40:22.347: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-4504" for this suite. 
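Note on the "Variable Expansion" spec above: $(VAR) references in a container's command are expanded by Kubernetes from the container's env before the process starts; no shell is involved in the substitution. A minimal sketch (hypothetical names, busybox assumed):

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: var-expansion-demo
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox:1.29
    env:
    - name: MESSAGE
      value: "test message"
    command: ["sh", "-c", "echo $(MESSAGE)"]   # $(MESSAGE) is expanded by the kubelet, not by sh
EOF
kubectl logs var-expansion-demo   # test message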
•{"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance]","total":278,"completed":136,"skipped":2330,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 6 21:40:22.355: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Apr 6 21:40:22.412: INFO: Waiting up to 5m0s for pod "downwardapi-volume-e02d98c5-e053-4761-ae0c-fc67866be2f7" in namespace "downward-api-269" to be "success or failure" Apr 6 21:40:22.416: INFO: Pod "downwardapi-volume-e02d98c5-e053-4761-ae0c-fc67866be2f7": Phase="Pending", Reason="", readiness=false. Elapsed: 3.331922ms Apr 6 21:40:24.420: INFO: Pod "downwardapi-volume-e02d98c5-e053-4761-ae0c-fc67866be2f7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006998172s Apr 6 21:40:26.424: INFO: Pod "downwardapi-volume-e02d98c5-e053-4761-ae0c-fc67866be2f7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011177632s STEP: Saw pod success Apr 6 21:40:26.424: INFO: Pod "downwardapi-volume-e02d98c5-e053-4761-ae0c-fc67866be2f7" satisfied condition "success or failure" Apr 6 21:40:26.427: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-e02d98c5-e053-4761-ae0c-fc67866be2f7 container client-container: STEP: delete the pod Apr 6 21:40:26.444: INFO: Waiting for pod downwardapi-volume-e02d98c5-e053-4761-ae0c-fc67866be2f7 to disappear Apr 6 21:40:26.455: INFO: Pod downwardapi-volume-e02d98c5-e053-4761-ae0c-fc67866be2f7 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 6 21:40:26.455: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-269" for this suite. 
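Note on the "Downward API volume ... default cpu limit" spec above: a downwardAPI volume item with resourceFieldRef limits.cpu falls back to the node's allocatable CPU when the container sets no limit, which is exactly what the spec asserts. A minimal sketch (hypothetical names):

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-demo
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox:1.29
    command: ["sh", "-c", "cat /etc/podinfo/cpu_limit"]
    # no resources.limits.cpu set: the projected value defaults to node allocatable
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: cpu_limit
        resourceFieldRef:
          containerName: client-container
          resource: limits.cpu
EOF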
•{"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":278,"completed":137,"skipped":2369,"failed":0} SSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 6 21:40:26.485: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name projected-configmap-test-volume-map-0c4c75c6-90c8-4255-a50d-e7e2771a2b12 STEP: Creating a pod to test consume configMaps Apr 6 21:40:26.613: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-5e1d3850-c34e-46f8-a68c-6f59e0a4c064" in namespace "projected-5599" to be "success or failure" Apr 6 21:40:26.658: INFO: Pod "pod-projected-configmaps-5e1d3850-c34e-46f8-a68c-6f59e0a4c064": Phase="Pending", Reason="", readiness=false. Elapsed: 45.053425ms Apr 6 21:40:28.662: INFO: Pod "pod-projected-configmaps-5e1d3850-c34e-46f8-a68c-6f59e0a4c064": Phase="Pending", Reason="", readiness=false. Elapsed: 2.048603033s Apr 6 21:40:30.666: INFO: Pod "pod-projected-configmaps-5e1d3850-c34e-46f8-a68c-6f59e0a4c064": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.052949498s STEP: Saw pod success Apr 6 21:40:30.666: INFO: Pod "pod-projected-configmaps-5e1d3850-c34e-46f8-a68c-6f59e0a4c064" satisfied condition "success or failure" Apr 6 21:40:30.669: INFO: Trying to get logs from node jerma-worker2 pod pod-projected-configmaps-5e1d3850-c34e-46f8-a68c-6f59e0a4c064 container projected-configmap-volume-test: STEP: delete the pod Apr 6 21:40:30.701: INFO: Waiting for pod pod-projected-configmaps-5e1d3850-c34e-46f8-a68c-6f59e0a4c064 to disappear Apr 6 21:40:30.706: INFO: Pod pod-projected-configmaps-5e1d3850-c34e-46f8-a68c-6f59e0a4c064 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 6 21:40:30.706: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5599" for this suite. 
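Note on the "Projected configMap ... mappings as non-root" spec above: "with mappings" means the configMap keys are projected under explicit item paths, and the pod runs with a non-root UID. A minimal sketch (hypothetical names and UID):

kubectl create configmap projected-demo --from-literal=data-1=value-1
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-configmaps-demo
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1000                # non-root
  containers:
  - name: projected-configmap-volume-test
    image: busybox:1.29
    command: ["sh", "-c", "cat /etc/projected/path/to/data-1"]
    volumeMounts:
    - name: projected-volume
      mountPath: /etc/projected
  volumes:
  - name: projected-volume
    projected:
      sources:
      - configMap:
          name: projected-demo
          items:
          - key: data-1
            path: path/to/data-1   # the mapping: key exposed under a chosen path
EOF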
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]","total":278,"completed":138,"skipped":2377,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 6 21:40:30.713: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Apr 6 21:40:30.818: INFO: Waiting up to 5m0s for pod "downwardapi-volume-ce2c5edf-f12a-4458-b70a-86b1429937ec" in namespace "downward-api-1252" to be "success or failure" Apr 6 21:40:30.821: INFO: Pod "downwardapi-volume-ce2c5edf-f12a-4458-b70a-86b1429937ec": Phase="Pending", Reason="", readiness=false. Elapsed: 3.515251ms Apr 6 21:40:32.825: INFO: Pod "downwardapi-volume-ce2c5edf-f12a-4458-b70a-86b1429937ec": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007256623s Apr 6 21:40:34.829: INFO: Pod "downwardapi-volume-ce2c5edf-f12a-4458-b70a-86b1429937ec": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011396304s STEP: Saw pod success Apr 6 21:40:34.829: INFO: Pod "downwardapi-volume-ce2c5edf-f12a-4458-b70a-86b1429937ec" satisfied condition "success or failure" Apr 6 21:40:34.832: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-ce2c5edf-f12a-4458-b70a-86b1429937ec container client-container: STEP: delete the pod Apr 6 21:40:34.868: INFO: Waiting for pod downwardapi-volume-ce2c5edf-f12a-4458-b70a-86b1429937ec to disappear Apr 6 21:40:34.880: INFO: Pod downwardapi-volume-ce2c5edf-f12a-4458-b70a-86b1429937ec no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 6 21:40:34.880: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-1252" for this suite. 
•{"msg":"PASSED [sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance]","total":278,"completed":139,"skipped":2404,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 6 21:40:34.888: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0666 on node default medium Apr 6 21:40:34.982: INFO: Waiting up to 5m0s for pod "pod-7e90a1c0-1123-4d80-b3a4-72bc3911e18c" in namespace "emptydir-5585" to be "success or failure" Apr 6 21:40:34.986: INFO: Pod "pod-7e90a1c0-1123-4d80-b3a4-72bc3911e18c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.081386ms Apr 6 21:40:36.990: INFO: Pod "pod-7e90a1c0-1123-4d80-b3a4-72bc3911e18c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008407598s Apr 6 21:40:38.993: INFO: Pod "pod-7e90a1c0-1123-4d80-b3a4-72bc3911e18c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011857721s STEP: Saw pod success Apr 6 21:40:38.994: INFO: Pod "pod-7e90a1c0-1123-4d80-b3a4-72bc3911e18c" satisfied condition "success or failure" Apr 6 21:40:38.996: INFO: Trying to get logs from node jerma-worker2 pod pod-7e90a1c0-1123-4d80-b3a4-72bc3911e18c container test-container: STEP: delete the pod Apr 6 21:40:39.053: INFO: Waiting for pod pod-7e90a1c0-1123-4d80-b3a4-72bc3911e18c to disappear Apr 6 21:40:39.057: INFO: Pod pod-7e90a1c0-1123-4d80-b3a4-72bc3911e18c no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 6 21:40:39.057: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-5585" for this suite. 
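Note on the "EmptyDir volumes (root,0666,default)" spec above: the tuple in the name is (user, file mode, medium), i.e. the container runs as root, writes a file with mode 0666, and the volume uses the default medium (node disk rather than medium: Memory). A minimal sketch (hypothetical names):

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-demo
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox:1.29
    command: ["sh", "-c", "touch /test-volume/f && chmod 0666 /test-volume/f && ls -l /test-volume/f"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir: {}    # default medium; medium: Memory would back it with tmpfs
EOF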
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":140,"skipped":2427,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 6 21:40:39.065: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Performing setup for networking test in namespace pod-network-test-4153 STEP: creating a selector STEP: Creating the service pods in kubernetes Apr 6 21:40:39.108: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods Apr 6 21:41:05.225: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.244.1.215 8081 | grep -v '^\s*$'] Namespace:pod-network-test-4153 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 6 21:41:05.225: INFO: >>> kubeConfig: /root/.kube/config I0406 21:41:05.260617 6 log.go:172] (0xc0027cd8c0) (0xc002730aa0) Create stream I0406 21:41:05.260650 6 log.go:172] (0xc0027cd8c0) (0xc002730aa0) Stream added, broadcasting: 1 I0406 21:41:05.262675 6 log.go:172] (0xc0027cd8c0) Reply frame received for 1 I0406 21:41:05.262726 6 log.go:172] (0xc0027cd8c0) (0xc001f0caa0) Create stream I0406 21:41:05.262748 6 log.go:172] (0xc0027cd8c0) (0xc001f0caa0) Stream added, broadcasting: 3 I0406 21:41:05.263754 6 log.go:172] (0xc0027cd8c0) Reply frame received for 3 I0406 21:41:05.263811 6 log.go:172] (0xc0027cd8c0) (0xc001ecc000) Create stream I0406 21:41:05.263827 6 log.go:172] (0xc0027cd8c0) (0xc001ecc000) Stream added, broadcasting: 5 I0406 21:41:05.264958 6 log.go:172] (0xc0027cd8c0) Reply frame received for 5 I0406 21:41:06.332358 6 log.go:172] (0xc0027cd8c0) Data frame received for 3 I0406 21:41:06.332388 6 log.go:172] (0xc001f0caa0) (3) Data frame handling I0406 21:41:06.332403 6 log.go:172] (0xc001f0caa0) (3) Data frame sent I0406 21:41:06.332411 6 log.go:172] (0xc0027cd8c0) Data frame received for 3 I0406 21:41:06.332420 6 log.go:172] (0xc001f0caa0) (3) Data frame handling I0406 21:41:06.332672 6 log.go:172] (0xc0027cd8c0) Data frame received for 5 I0406 21:41:06.332690 6 log.go:172] (0xc001ecc000) (5) Data frame handling I0406 21:41:06.334495 6 log.go:172] (0xc0027cd8c0) Data frame received for 1 I0406 21:41:06.334615 6 log.go:172] (0xc002730aa0) (1) Data frame handling I0406 21:41:06.334647 6 log.go:172] (0xc002730aa0) (1) Data frame sent I0406 21:41:06.334665 6 log.go:172] (0xc0027cd8c0) (0xc002730aa0) Stream removed, broadcasting: 1 I0406 21:41:06.334686 6 log.go:172] (0xc0027cd8c0) Go away received I0406 21:41:06.335042 6 log.go:172] (0xc0027cd8c0) (0xc002730aa0) Stream removed, broadcasting: 1 I0406 21:41:06.335068 6 log.go:172] (0xc0027cd8c0) (0xc001f0caa0) Stream removed, broadcasting: 3 I0406 21:41:06.335088 6 log.go:172] 
(0xc0027cd8c0) (0xc001ecc000) Stream removed, broadcasting: 5 Apr 6 21:41:06.335: INFO: Found all expected endpoints: [netserver-0] Apr 6 21:41:06.338: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.244.2.30 8081 | grep -v '^\s*$'] Namespace:pod-network-test-4153 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 6 21:41:06.338: INFO: >>> kubeConfig: /root/.kube/config I0406 21:41:06.364028 6 log.go:172] (0xc0008e5290) (0xc001d30640) Create stream I0406 21:41:06.364061 6 log.go:172] (0xc0008e5290) (0xc001d30640) Stream added, broadcasting: 1 I0406 21:41:06.365609 6 log.go:172] (0xc0008e5290) Reply frame received for 1 I0406 21:41:06.365640 6 log.go:172] (0xc0008e5290) (0xc001f0cbe0) Create stream I0406 21:41:06.365651 6 log.go:172] (0xc0008e5290) (0xc001f0cbe0) Stream added, broadcasting: 3 I0406 21:41:06.366379 6 log.go:172] (0xc0008e5290) Reply frame received for 3 I0406 21:41:06.366406 6 log.go:172] (0xc0008e5290) (0xc001f0cd20) Create stream I0406 21:41:06.366417 6 log.go:172] (0xc0008e5290) (0xc001f0cd20) Stream added, broadcasting: 5 I0406 21:41:06.367142 6 log.go:172] (0xc0008e5290) Reply frame received for 5 I0406 21:41:07.453326 6 log.go:172] (0xc0008e5290) Data frame received for 3 I0406 21:41:07.453520 6 log.go:172] (0xc001f0cbe0) (3) Data frame handling I0406 21:41:07.453588 6 log.go:172] (0xc001f0cbe0) (3) Data frame sent I0406 21:41:07.453735 6 log.go:172] (0xc0008e5290) Data frame received for 5 I0406 21:41:07.453782 6 log.go:172] (0xc001f0cd20) (5) Data frame handling I0406 21:41:07.453813 6 log.go:172] (0xc0008e5290) Data frame received for 3 I0406 21:41:07.453830 6 log.go:172] (0xc001f0cbe0) (3) Data frame handling I0406 21:41:07.455399 6 log.go:172] (0xc0008e5290) Data frame received for 1 I0406 21:41:07.455433 6 log.go:172] (0xc001d30640) (1) Data frame handling I0406 21:41:07.455462 6 log.go:172] (0xc001d30640) (1) Data frame sent I0406 21:41:07.455494 6 log.go:172] (0xc0008e5290) (0xc001d30640) Stream removed, broadcasting: 1 I0406 21:41:07.455534 6 log.go:172] (0xc0008e5290) Go away received I0406 21:41:07.455592 6 log.go:172] (0xc0008e5290) (0xc001d30640) Stream removed, broadcasting: 1 I0406 21:41:07.455609 6 log.go:172] (0xc0008e5290) (0xc001f0cbe0) Stream removed, broadcasting: 3 I0406 21:41:07.455616 6 log.go:172] (0xc0008e5290) (0xc001f0cd20) Stream removed, broadcasting: 5 Apr 6 21:41:07.455: INFO: Found all expected endpoints: [netserver-1] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 6 21:41:07.455: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-4153" for this suite. 
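Note on the "Networking Granular Checks" spec above: the ExecWithOptions lines show the actual check, a UDP datagram sent from the host-network test pod to each netserver pod (nc -u ... 8081); each netserver answers with its own hostname, which is how "Found all expected endpoints: [netserver-0]" and "[netserver-1]" are derived. The same probe, runnable by hand with the names and pod IP quoted in the log:

kubectl exec -n pod-network-test-4153 host-test-container-pod -c agnhost -- \
  /bin/sh -c "echo hostName | nc -w 1 -u 10.244.1.215 8081 | grep -v '^\s*$'"   # expect: netserver-0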
• [SLOW TEST:28.398 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29 should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":141,"skipped":2439,"failed":0} SSS ------------------------------ [sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 6 21:41:07.462: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Apr 6 21:41:07.545: INFO: Waiting up to 5m0s for pod "downwardapi-volume-641c3687-b916-4528-acff-3e7a54977d9e" in namespace "downward-api-6198" to be "success or failure" Apr 6 21:41:07.549: INFO: Pod "downwardapi-volume-641c3687-b916-4528-acff-3e7a54977d9e": Phase="Pending", Reason="", readiness=false. Elapsed: 4.476828ms Apr 6 21:41:09.554: INFO: Pod "downwardapi-volume-641c3687-b916-4528-acff-3e7a54977d9e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009053392s Apr 6 21:41:11.558: INFO: Pod "downwardapi-volume-641c3687-b916-4528-acff-3e7a54977d9e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.013489767s STEP: Saw pod success Apr 6 21:41:11.558: INFO: Pod "downwardapi-volume-641c3687-b916-4528-acff-3e7a54977d9e" satisfied condition "success or failure" Apr 6 21:41:11.561: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-641c3687-b916-4528-acff-3e7a54977d9e container client-container: STEP: delete the pod Apr 6 21:41:11.624: INFO: Waiting for pod downwardapi-volume-641c3687-b916-4528-acff-3e7a54977d9e to disappear Apr 6 21:41:11.633: INFO: Pod downwardapi-volume-641c3687-b916-4528-acff-3e7a54977d9e no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 6 21:41:11.633: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-6198" for this suite. 
•{"msg":"PASSED [sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance]","total":278,"completed":142,"skipped":2442,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a read only busybox container should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 6 21:41:11.640: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [It] should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 6 21:41:15.774: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-8204" for this suite. •{"msg":"PASSED [k8s.io] Kubelet when scheduling a read only busybox container should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":143,"skipped":2466,"failed":0} SSSSSSS ------------------------------ [sig-network] Service endpoints latency should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 6 21:41:15.782: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svc-latency STEP: Waiting for a default service account to be provisioned in namespace [It] should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Apr 6 21:41:15.827: INFO: >>> kubeConfig: /root/.kube/config STEP: creating replication controller svc-latency-rc in namespace svc-latency-7078 I0406 21:41:15.845838 6 runners.go:189] Created replication controller with name: svc-latency-rc, namespace: svc-latency-7078, replica count: 1 I0406 21:41:16.896274 6 runners.go:189] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0406 21:41:17.896527 6 runners.go:189] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0406 21:41:18.896773 6 runners.go:189] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0406 21:41:19.897037 6 runners.go:189] svc-latency-rc Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Apr 6 21:41:20.028: INFO: Created: latency-svc-2zn9g Apr 6 21:41:20.079: INFO: Got endpoints: latency-svc-2zn9g [82.423032ms] Apr 6 21:41:20.136: INFO: Created: latency-svc-br26j Apr 6 21:41:20.160: INFO: Got endpoints: 
latency-svc-br26j [80.139663ms] Apr 6 21:41:20.216: INFO: Created: latency-svc-stlwp Apr 6 21:41:20.219: INFO: Got endpoints: latency-svc-stlwp [139.62488ms] Apr 6 21:41:20.244: INFO: Created: latency-svc-5pb4c Apr 6 21:41:20.257: INFO: Got endpoints: latency-svc-5pb4c [177.72696ms] Apr 6 21:41:20.280: INFO: Created: latency-svc-4t429 Apr 6 21:41:20.326: INFO: Got endpoints: latency-svc-4t429 [246.651048ms] Apr 6 21:41:20.373: INFO: Created: latency-svc-xtwkb Apr 6 21:41:20.378: INFO: Got endpoints: latency-svc-xtwkb [298.056297ms] Apr 6 21:41:20.418: INFO: Created: latency-svc-26grm Apr 6 21:41:20.442: INFO: Got endpoints: latency-svc-26grm [361.894631ms] Apr 6 21:41:20.466: INFO: Created: latency-svc-sdgx7 Apr 6 21:41:20.521: INFO: Got endpoints: latency-svc-sdgx7 [441.335435ms] Apr 6 21:41:20.525: INFO: Created: latency-svc-ddcfj Apr 6 21:41:20.538: INFO: Got endpoints: latency-svc-ddcfj [458.129382ms] Apr 6 21:41:20.562: INFO: Created: latency-svc-nwxvv Apr 6 21:41:20.574: INFO: Got endpoints: latency-svc-nwxvv [494.278649ms] Apr 6 21:41:20.592: INFO: Created: latency-svc-m2mc5 Apr 6 21:41:20.616: INFO: Got endpoints: latency-svc-m2mc5 [536.510579ms] Apr 6 21:41:20.683: INFO: Created: latency-svc-r2z9n Apr 6 21:41:20.706: INFO: Got endpoints: latency-svc-r2z9n [626.060734ms] Apr 6 21:41:20.736: INFO: Created: latency-svc-dpcxb Apr 6 21:41:20.745: INFO: Got endpoints: latency-svc-dpcxb [665.727033ms] Apr 6 21:41:20.833: INFO: Created: latency-svc-6qg2f Apr 6 21:41:20.838: INFO: Got endpoints: latency-svc-6qg2f [758.837824ms] Apr 6 21:41:20.862: INFO: Created: latency-svc-fqfbw Apr 6 21:41:20.878: INFO: Got endpoints: latency-svc-fqfbw [798.494974ms] Apr 6 21:41:20.898: INFO: Created: latency-svc-7mq7v Apr 6 21:41:20.914: INFO: Got endpoints: latency-svc-7mq7v [834.528315ms] Apr 6 21:41:20.976: INFO: Created: latency-svc-28jsx Apr 6 21:41:20.980: INFO: Got endpoints: latency-svc-28jsx [820.524178ms] Apr 6 21:41:21.006: INFO: Created: latency-svc-frjkv Apr 6 21:41:21.032: INFO: Got endpoints: latency-svc-frjkv [813.33721ms] Apr 6 21:41:21.054: INFO: Created: latency-svc-rhpg9 Apr 6 21:41:21.068: INFO: Got endpoints: latency-svc-rhpg9 [811.193268ms] Apr 6 21:41:21.120: INFO: Created: latency-svc-k2chj Apr 6 21:41:21.134: INFO: Got endpoints: latency-svc-k2chj [807.914146ms] Apr 6 21:41:21.155: INFO: Created: latency-svc-7dsq7 Apr 6 21:41:21.164: INFO: Got endpoints: latency-svc-7dsq7 [786.717417ms] Apr 6 21:41:21.216: INFO: Created: latency-svc-l7ql8 Apr 6 21:41:21.258: INFO: Got endpoints: latency-svc-l7ql8 [816.023735ms] Apr 6 21:41:21.281: INFO: Created: latency-svc-sgwrb Apr 6 21:41:21.297: INFO: Got endpoints: latency-svc-sgwrb [776.246851ms] Apr 6 21:41:21.335: INFO: Created: latency-svc-znf5h Apr 6 21:41:21.348: INFO: Got endpoints: latency-svc-znf5h [810.178755ms] Apr 6 21:41:21.395: INFO: Created: latency-svc-kxcq4 Apr 6 21:41:21.403: INFO: Got endpoints: latency-svc-kxcq4 [829.139409ms] Apr 6 21:41:21.431: INFO: Created: latency-svc-7pqj7 Apr 6 21:41:21.445: INFO: Got endpoints: latency-svc-7pqj7 [828.568795ms] Apr 6 21:41:21.486: INFO: Created: latency-svc-tqlz2 Apr 6 21:41:21.611: INFO: Got endpoints: latency-svc-tqlz2 [905.325434ms] Apr 6 21:41:21.685: INFO: Created: latency-svc-22g5m Apr 6 21:41:21.773: INFO: Got endpoints: latency-svc-22g5m [1.027543879s] Apr 6 21:41:21.787: INFO: Created: latency-svc-gswmw Apr 6 21:41:21.799: INFO: Got endpoints: latency-svc-gswmw [960.566889ms] Apr 6 21:41:21.959: INFO: Created: latency-svc-pcp4j Apr 6 21:41:21.962: INFO: Got endpoints: 
latency-svc-pcp4j [1.083983455s] Apr 6 21:41:22.008: INFO: Created: latency-svc-qbt8v Apr 6 21:41:22.036: INFO: Got endpoints: latency-svc-qbt8v [1.122071594s] Apr 6 21:41:22.145: INFO: Created: latency-svc-wjfg4 Apr 6 21:41:22.150: INFO: Got endpoints: latency-svc-wjfg4 [1.170094306s] Apr 6 21:41:22.188: INFO: Created: latency-svc-zm4hl Apr 6 21:41:22.288: INFO: Got endpoints: latency-svc-zm4hl [1.25544347s] Apr 6 21:41:22.362: INFO: Created: latency-svc-2hhp4 Apr 6 21:41:22.379: INFO: Got endpoints: latency-svc-2hhp4 [1.310941606s] Apr 6 21:41:22.467: INFO: Created: latency-svc-swqsl Apr 6 21:41:22.476: INFO: Got endpoints: latency-svc-swqsl [1.341228409s] Apr 6 21:41:22.524: INFO: Created: latency-svc-bhxsr Apr 6 21:41:22.559: INFO: Got endpoints: latency-svc-bhxsr [1.394721437s] Apr 6 21:41:22.662: INFO: Created: latency-svc-ddjqz Apr 6 21:41:22.704: INFO: Got endpoints: latency-svc-ddjqz [1.446029101s] Apr 6 21:41:22.759: INFO: Created: latency-svc-64vwd Apr 6 21:41:22.782: INFO: Got endpoints: latency-svc-64vwd [1.484460652s] Apr 6 21:41:22.824: INFO: Created: latency-svc-2nxrw Apr 6 21:41:22.891: INFO: Got endpoints: latency-svc-2nxrw [1.542759203s] Apr 6 21:41:22.932: INFO: Created: latency-svc-bbl5h Apr 6 21:41:22.957: INFO: Got endpoints: latency-svc-bbl5h [1.553550655s] Apr 6 21:41:23.060: INFO: Created: latency-svc-g9jpq Apr 6 21:41:23.064: INFO: Got endpoints: latency-svc-g9jpq [1.618650859s] Apr 6 21:41:23.095: INFO: Created: latency-svc-pjl6v Apr 6 21:41:23.119: INFO: Got endpoints: latency-svc-pjl6v [1.507284796s] Apr 6 21:41:23.143: INFO: Created: latency-svc-9bdl6 Apr 6 21:41:23.154: INFO: Got endpoints: latency-svc-9bdl6 [1.381335786s] Apr 6 21:41:23.204: INFO: Created: latency-svc-rwq9w Apr 6 21:41:23.209: INFO: Got endpoints: latency-svc-rwq9w [1.410042289s] Apr 6 21:41:23.245: INFO: Created: latency-svc-56zz6 Apr 6 21:41:23.261: INFO: Got endpoints: latency-svc-56zz6 [1.298271921s] Apr 6 21:41:23.281: INFO: Created: latency-svc-zhk92 Apr 6 21:41:23.291: INFO: Got endpoints: latency-svc-zhk92 [1.2540245s] Apr 6 21:41:23.341: INFO: Created: latency-svc-kqqcl Apr 6 21:41:23.345: INFO: Got endpoints: latency-svc-kqqcl [1.194428712s] Apr 6 21:41:23.394: INFO: Created: latency-svc-pdsdb Apr 6 21:41:23.411: INFO: Got endpoints: latency-svc-pdsdb [1.123080933s] Apr 6 21:41:23.431: INFO: Created: latency-svc-jm7q9 Apr 6 21:41:23.479: INFO: Got endpoints: latency-svc-jm7q9 [1.099439731s] Apr 6 21:41:23.484: INFO: Created: latency-svc-zrw8v Apr 6 21:41:23.490: INFO: Got endpoints: latency-svc-zrw8v [1.014144321s] Apr 6 21:41:23.509: INFO: Created: latency-svc-zkdqw Apr 6 21:41:23.514: INFO: Got endpoints: latency-svc-zkdqw [954.885686ms] Apr 6 21:41:23.550: INFO: Created: latency-svc-xrcmk Apr 6 21:41:23.556: INFO: Got endpoints: latency-svc-xrcmk [852.296427ms] Apr 6 21:41:23.617: INFO: Created: latency-svc-kmvfq Apr 6 21:41:23.634: INFO: Got endpoints: latency-svc-kmvfq [852.252945ms] Apr 6 21:41:23.664: INFO: Created: latency-svc-ldcsp Apr 6 21:41:23.688: INFO: Got endpoints: latency-svc-ldcsp [797.302562ms] Apr 6 21:41:23.749: INFO: Created: latency-svc-dhfrw Apr 6 21:41:23.752: INFO: Got endpoints: latency-svc-dhfrw [795.213881ms] Apr 6 21:41:23.778: INFO: Created: latency-svc-jhtf2 Apr 6 21:41:23.788: INFO: Got endpoints: latency-svc-jhtf2 [724.801382ms] Apr 6 21:41:23.810: INFO: Created: latency-svc-2wvjp Apr 6 21:41:23.819: INFO: Got endpoints: latency-svc-2wvjp [700.169083ms] Apr 6 21:41:23.849: INFO: Created: latency-svc-2rqb9 Apr 6 21:41:23.910: INFO: Got endpoints: 
latency-svc-2rqb9 [755.887997ms] Apr 6 21:41:23.928: INFO: Created: latency-svc-swbpm Apr 6 21:41:23.958: INFO: Got endpoints: latency-svc-swbpm [749.129023ms] Apr 6 21:41:23.995: INFO: Created: latency-svc-2bqc6 Apr 6 21:41:24.005: INFO: Got endpoints: latency-svc-2bqc6 [744.13834ms] Apr 6 21:41:24.084: INFO: Created: latency-svc-457k7 Apr 6 21:41:24.089: INFO: Got endpoints: latency-svc-457k7 [798.747005ms] Apr 6 21:41:24.115: INFO: Created: latency-svc-sbqrw Apr 6 21:41:24.139: INFO: Got endpoints: latency-svc-sbqrw [794.114115ms] Apr 6 21:41:24.162: INFO: Created: latency-svc-w65cc Apr 6 21:41:24.174: INFO: Got endpoints: latency-svc-w65cc [762.901557ms] Apr 6 21:41:24.222: INFO: Created: latency-svc-ttcdg Apr 6 21:41:24.227: INFO: Got endpoints: latency-svc-ttcdg [748.286149ms] Apr 6 21:41:24.252: INFO: Created: latency-svc-cvpkm Apr 6 21:41:24.276: INFO: Got endpoints: latency-svc-cvpkm [786.520194ms] Apr 6 21:41:24.371: INFO: Created: latency-svc-cp8zn Apr 6 21:41:24.384: INFO: Got endpoints: latency-svc-cp8zn [869.99471ms] Apr 6 21:41:24.432: INFO: Created: latency-svc-h7w75 Apr 6 21:41:24.521: INFO: Got endpoints: latency-svc-h7w75 [965.415558ms] Apr 6 21:41:24.553: INFO: Created: latency-svc-7xv74 Apr 6 21:41:24.606: INFO: Got endpoints: latency-svc-7xv74 [972.030764ms] Apr 6 21:41:24.683: INFO: Created: latency-svc-8rbbl Apr 6 21:41:24.687: INFO: Got endpoints: latency-svc-8rbbl [998.234717ms] Apr 6 21:41:24.738: INFO: Created: latency-svc-9zsjw Apr 6 21:41:24.749: INFO: Got endpoints: latency-svc-9zsjw [997.028013ms] Apr 6 21:41:24.826: INFO: Created: latency-svc-j8g76 Apr 6 21:41:24.829: INFO: Got endpoints: latency-svc-j8g76 [1.040992205s] Apr 6 21:41:24.858: INFO: Created: latency-svc-jdttc Apr 6 21:41:24.870: INFO: Got endpoints: latency-svc-jdttc [1.051540658s] Apr 6 21:41:24.894: INFO: Created: latency-svc-45lk4 Apr 6 21:41:24.905: INFO: Got endpoints: latency-svc-45lk4 [995.098585ms] Apr 6 21:41:24.924: INFO: Created: latency-svc-48mgg Apr 6 21:41:24.982: INFO: Got endpoints: latency-svc-48mgg [1.023773652s] Apr 6 21:41:24.986: INFO: Created: latency-svc-5bq8n Apr 6 21:41:24.990: INFO: Got endpoints: latency-svc-5bq8n [985.225383ms] Apr 6 21:41:25.014: INFO: Created: latency-svc-7lwgl Apr 6 21:41:25.038: INFO: Got endpoints: latency-svc-7lwgl [948.514351ms] Apr 6 21:41:25.068: INFO: Created: latency-svc-d6wcl Apr 6 21:41:25.081: INFO: Got endpoints: latency-svc-d6wcl [941.824506ms] Apr 6 21:41:25.170: INFO: Created: latency-svc-f66xh Apr 6 21:41:25.183: INFO: Got endpoints: latency-svc-f66xh [1.008869466s] Apr 6 21:41:25.218: INFO: Created: latency-svc-tss4p Apr 6 21:41:25.237: INFO: Got endpoints: latency-svc-tss4p [1.009955409s] Apr 6 21:41:25.294: INFO: Created: latency-svc-pp94k Apr 6 21:41:25.297: INFO: Got endpoints: latency-svc-pp94k [1.021033982s] Apr 6 21:41:25.332: INFO: Created: latency-svc-kt2wl Apr 6 21:41:25.346: INFO: Got endpoints: latency-svc-kt2wl [961.594856ms] Apr 6 21:41:25.368: INFO: Created: latency-svc-hp67l Apr 6 21:41:25.382: INFO: Got endpoints: latency-svc-hp67l [860.79568ms] Apr 6 21:41:25.425: INFO: Created: latency-svc-w7xl5 Apr 6 21:41:25.428: INFO: Got endpoints: latency-svc-w7xl5 [821.732799ms] Apr 6 21:41:25.455: INFO: Created: latency-svc-6wwjh Apr 6 21:41:25.467: INFO: Got endpoints: latency-svc-6wwjh [780.115208ms] Apr 6 21:41:25.488: INFO: Created: latency-svc-gv2gt Apr 6 21:41:25.515: INFO: Got endpoints: latency-svc-gv2gt [766.444294ms] Apr 6 21:41:25.563: INFO: Created: latency-svc-gwrjw Apr 6 21:41:25.566: INFO: Got endpoints: 
latency-svc-gwrjw [736.820803ms] Apr 6 21:41:25.596: INFO: Created: latency-svc-zc5mt Apr 6 21:41:25.612: INFO: Got endpoints: latency-svc-zc5mt [741.237776ms] Apr 6 21:41:25.633: INFO: Created: latency-svc-kmfl5 Apr 6 21:41:25.642: INFO: Got endpoints: latency-svc-kmfl5 [736.579182ms] Apr 6 21:41:25.662: INFO: Created: latency-svc-mhtts Apr 6 21:41:25.701: INFO: Got endpoints: latency-svc-mhtts [718.610546ms] Apr 6 21:41:25.710: INFO: Created: latency-svc-858j4 Apr 6 21:41:25.727: INFO: Got endpoints: latency-svc-858j4 [736.476771ms] Apr 6 21:41:25.746: INFO: Created: latency-svc-mvlq2 Apr 6 21:41:25.757: INFO: Got endpoints: latency-svc-mvlq2 [718.931716ms] Apr 6 21:41:25.779: INFO: Created: latency-svc-fhg4m Apr 6 21:41:25.787: INFO: Got endpoints: latency-svc-fhg4m [706.129866ms] Apr 6 21:41:25.845: INFO: Created: latency-svc-299sb Apr 6 21:41:25.847: INFO: Got endpoints: latency-svc-299sb [664.340373ms] Apr 6 21:41:25.890: INFO: Created: latency-svc-tsrmp Apr 6 21:41:25.902: INFO: Got endpoints: latency-svc-tsrmp [664.703609ms] Apr 6 21:41:25.920: INFO: Created: latency-svc-r9s9k Apr 6 21:41:25.932: INFO: Got endpoints: latency-svc-r9s9k [634.67185ms] Apr 6 21:41:25.994: INFO: Created: latency-svc-6nx9w Apr 6 21:41:25.997: INFO: Got endpoints: latency-svc-6nx9w [650.9236ms] Apr 6 21:41:26.034: INFO: Created: latency-svc-dc2hs Apr 6 21:41:26.047: INFO: Got endpoints: latency-svc-dc2hs [664.394004ms] Apr 6 21:41:26.082: INFO: Created: latency-svc-rcw54 Apr 6 21:41:26.144: INFO: Got endpoints: latency-svc-rcw54 [716.108704ms] Apr 6 21:41:26.147: INFO: Created: latency-svc-p4h62 Apr 6 21:41:26.155: INFO: Got endpoints: latency-svc-p4h62 [688.649719ms] Apr 6 21:41:26.178: INFO: Created: latency-svc-fmp7s Apr 6 21:41:26.192: INFO: Got endpoints: latency-svc-fmp7s [676.303819ms] Apr 6 21:41:26.214: INFO: Created: latency-svc-tslqv Apr 6 21:41:26.238: INFO: Got endpoints: latency-svc-tslqv [671.923392ms] Apr 6 21:41:26.289: INFO: Created: latency-svc-l2fdc Apr 6 21:41:26.310: INFO: Got endpoints: latency-svc-l2fdc [698.423783ms] Apr 6 21:41:26.341: INFO: Created: latency-svc-bfptw Apr 6 21:41:26.376: INFO: Got endpoints: latency-svc-bfptw [733.883391ms] Apr 6 21:41:26.444: INFO: Created: latency-svc-r9tz9 Apr 6 21:41:26.466: INFO: Created: latency-svc-27lqx Apr 6 21:41:26.468: INFO: Got endpoints: latency-svc-r9tz9 [766.825969ms] Apr 6 21:41:26.490: INFO: Got endpoints: latency-svc-27lqx [763.348304ms] Apr 6 21:41:26.526: INFO: Created: latency-svc-bhcf6 Apr 6 21:41:26.587: INFO: Got endpoints: latency-svc-bhcf6 [830.166932ms] Apr 6 21:41:26.616: INFO: Created: latency-svc-f76t4 Apr 6 21:41:26.632: INFO: Got endpoints: latency-svc-f76t4 [844.527944ms] Apr 6 21:41:26.658: INFO: Created: latency-svc-dhkcv Apr 6 21:41:26.719: INFO: Got endpoints: latency-svc-dhkcv [871.582549ms] Apr 6 21:41:26.742: INFO: Created: latency-svc-hgbk5 Apr 6 21:41:26.758: INFO: Got endpoints: latency-svc-hgbk5 [856.018831ms] Apr 6 21:41:26.778: INFO: Created: latency-svc-v9ftd Apr 6 21:41:26.806: INFO: Got endpoints: latency-svc-v9ftd [874.32153ms] Apr 6 21:41:26.869: INFO: Created: latency-svc-wrvfs Apr 6 21:41:26.895: INFO: Got endpoints: latency-svc-wrvfs [898.332642ms] Apr 6 21:41:26.928: INFO: Created: latency-svc-ptmh8 Apr 6 21:41:26.942: INFO: Got endpoints: latency-svc-ptmh8 [894.785146ms] Apr 6 21:41:26.958: INFO: Created: latency-svc-4h4pc Apr 6 21:41:27.006: INFO: Got endpoints: latency-svc-4h4pc [861.564464ms] Apr 6 21:41:27.017: INFO: Created: latency-svc-66qqv Apr 6 21:41:27.030: INFO: Got endpoints: 
latency-svc-66qqv [874.25223ms] Apr 6 21:41:27.048: INFO: Created: latency-svc-9d487 Apr 6 21:41:27.060: INFO: Got endpoints: latency-svc-9d487 [868.359269ms] Apr 6 21:41:27.078: INFO: Created: latency-svc-h824k Apr 6 21:41:27.090: INFO: Got endpoints: latency-svc-h824k [852.058722ms] Apr 6 21:41:27.144: INFO: Created: latency-svc-bv4tv Apr 6 21:41:27.148: INFO: Got endpoints: latency-svc-bv4tv [837.614307ms] Apr 6 21:41:27.186: INFO: Created: latency-svc-jnrq7 Apr 6 21:41:27.199: INFO: Got endpoints: latency-svc-jnrq7 [822.603506ms] Apr 6 21:41:27.222: INFO: Created: latency-svc-m24rv Apr 6 21:41:27.282: INFO: Got endpoints: latency-svc-m24rv [813.948615ms] Apr 6 21:41:27.287: INFO: Created: latency-svc-wk2br Apr 6 21:41:27.314: INFO: Got endpoints: latency-svc-wk2br [823.644274ms] Apr 6 21:41:27.336: INFO: Created: latency-svc-cs5kj Apr 6 21:41:27.350: INFO: Got endpoints: latency-svc-cs5kj [762.91114ms] Apr 6 21:41:27.372: INFO: Created: latency-svc-kwqzq Apr 6 21:41:27.419: INFO: Got endpoints: latency-svc-kwqzq [787.247576ms] Apr 6 21:41:27.432: INFO: Created: latency-svc-p5l9l Apr 6 21:41:27.447: INFO: Got endpoints: latency-svc-p5l9l [727.489643ms] Apr 6 21:41:27.480: INFO: Created: latency-svc-mr9kv Apr 6 21:41:27.495: INFO: Got endpoints: latency-svc-mr9kv [736.559884ms] Apr 6 21:41:27.515: INFO: Created: latency-svc-8csfr Apr 6 21:41:27.563: INFO: Got endpoints: latency-svc-8csfr [756.405416ms] Apr 6 21:41:27.570: INFO: Created: latency-svc-kb5bd Apr 6 21:41:27.585: INFO: Got endpoints: latency-svc-kb5bd [690.069607ms] Apr 6 21:41:27.630: INFO: Created: latency-svc-g5f59 Apr 6 21:41:27.639: INFO: Got endpoints: latency-svc-g5f59 [697.866271ms] Apr 6 21:41:27.663: INFO: Created: latency-svc-f27mh Apr 6 21:41:27.737: INFO: Got endpoints: latency-svc-f27mh [731.084491ms] Apr 6 21:41:27.738: INFO: Created: latency-svc-557qt Apr 6 21:41:27.742: INFO: Got endpoints: latency-svc-557qt [712.776888ms] Apr 6 21:41:27.762: INFO: Created: latency-svc-jbx42 Apr 6 21:41:27.779: INFO: Got endpoints: latency-svc-jbx42 [718.409424ms] Apr 6 21:41:27.798: INFO: Created: latency-svc-8jvc2 Apr 6 21:41:27.809: INFO: Got endpoints: latency-svc-8jvc2 [718.689686ms] Apr 6 21:41:27.828: INFO: Created: latency-svc-rkv4h Apr 6 21:41:27.886: INFO: Got endpoints: latency-svc-rkv4h [738.588243ms] Apr 6 21:41:27.905: INFO: Created: latency-svc-jj8qw Apr 6 21:41:27.936: INFO: Got endpoints: latency-svc-jj8qw [736.98263ms] Apr 6 21:41:27.966: INFO: Created: latency-svc-k98gh Apr 6 21:41:28.030: INFO: Got endpoints: latency-svc-k98gh [748.571783ms] Apr 6 21:41:28.050: INFO: Created: latency-svc-d8nmp Apr 6 21:41:28.068: INFO: Got endpoints: latency-svc-d8nmp [754.009721ms] Apr 6 21:41:28.092: INFO: Created: latency-svc-dg5s5 Apr 6 21:41:28.104: INFO: Got endpoints: latency-svc-dg5s5 [754.054149ms] Apr 6 21:41:28.122: INFO: Created: latency-svc-f7btf Apr 6 21:41:28.155: INFO: Got endpoints: latency-svc-f7btf [736.462037ms] Apr 6 21:41:28.170: INFO: Created: latency-svc-f5wtm Apr 6 21:41:28.183: INFO: Got endpoints: latency-svc-f5wtm [736.5236ms] Apr 6 21:41:28.206: INFO: Created: latency-svc-bnqdx Apr 6 21:41:28.219: INFO: Got endpoints: latency-svc-bnqdx [724.552381ms] Apr 6 21:41:28.242: INFO: Created: latency-svc-ntc5v Apr 6 21:41:28.255: INFO: Got endpoints: latency-svc-ntc5v [692.347281ms] Apr 6 21:41:28.301: INFO: Created: latency-svc-kw9zb Apr 6 21:41:28.310: INFO: Got endpoints: latency-svc-kw9zb [724.338136ms] Apr 6 21:41:28.332: INFO: Created: latency-svc-hckr2 Apr 6 21:41:28.346: INFO: Got endpoints: 
latency-svc-hckr2 [706.583674ms] Apr 6 21:41:28.368: INFO: Created: latency-svc-lls2c Apr 6 21:41:28.382: INFO: Got endpoints: latency-svc-lls2c [645.273847ms] Apr 6 21:41:28.444: INFO: Created: latency-svc-sgcsm Apr 6 21:41:28.448: INFO: Got endpoints: latency-svc-sgcsm [705.988462ms] Apr 6 21:41:28.477: INFO: Created: latency-svc-x9n8t Apr 6 21:41:28.491: INFO: Got endpoints: latency-svc-x9n8t [712.548409ms] Apr 6 21:41:28.512: INFO: Created: latency-svc-896vj Apr 6 21:41:28.528: INFO: Got endpoints: latency-svc-896vj [718.366288ms] Apr 6 21:41:28.587: INFO: Created: latency-svc-58cmd Apr 6 21:41:28.614: INFO: Got endpoints: latency-svc-58cmd [727.927193ms] Apr 6 21:41:28.615: INFO: Created: latency-svc-cjjqr Apr 6 21:41:28.624: INFO: Got endpoints: latency-svc-cjjqr [688.149683ms] Apr 6 21:41:28.644: INFO: Created: latency-svc-s87hq Apr 6 21:41:28.661: INFO: Got endpoints: latency-svc-s87hq [630.374403ms] Apr 6 21:41:28.755: INFO: Created: latency-svc-xqs6h Apr 6 21:41:28.770: INFO: Got endpoints: latency-svc-xqs6h [702.11557ms] Apr 6 21:41:28.800: INFO: Created: latency-svc-jjsbx Apr 6 21:41:28.910: INFO: Got endpoints: latency-svc-jjsbx [806.037866ms] Apr 6 21:41:28.914: INFO: Created: latency-svc-bjxs5 Apr 6 21:41:28.919: INFO: Got endpoints: latency-svc-bjxs5 [763.345107ms] Apr 6 21:41:28.938: INFO: Created: latency-svc-5kbt4 Apr 6 21:41:28.949: INFO: Got endpoints: latency-svc-5kbt4 [765.561515ms] Apr 6 21:41:28.974: INFO: Created: latency-svc-vvz24 Apr 6 21:41:28.986: INFO: Got endpoints: latency-svc-vvz24 [766.146463ms] Apr 6 21:41:29.003: INFO: Created: latency-svc-htcvd Apr 6 21:41:29.054: INFO: Got endpoints: latency-svc-htcvd [798.28788ms] Apr 6 21:41:29.058: INFO: Created: latency-svc-6765f Apr 6 21:41:29.070: INFO: Got endpoints: latency-svc-6765f [760.463379ms] Apr 6 21:41:29.088: INFO: Created: latency-svc-xb9qj Apr 6 21:41:29.100: INFO: Got endpoints: latency-svc-xb9qj [754.248714ms] Apr 6 21:41:29.118: INFO: Created: latency-svc-wj2bt Apr 6 21:41:29.131: INFO: Got endpoints: latency-svc-wj2bt [748.652424ms] Apr 6 21:41:29.148: INFO: Created: latency-svc-qbs4h Apr 6 21:41:29.192: INFO: Got endpoints: latency-svc-qbs4h [743.137664ms] Apr 6 21:41:29.202: INFO: Created: latency-svc-dsqjx Apr 6 21:41:29.216: INFO: Got endpoints: latency-svc-dsqjx [724.332538ms] Apr 6 21:41:29.252: INFO: Created: latency-svc-dw86p Apr 6 21:41:29.286: INFO: Got endpoints: latency-svc-dw86p [758.673173ms] Apr 6 21:41:29.348: INFO: Created: latency-svc-cz9zv Apr 6 21:41:29.361: INFO: Got endpoints: latency-svc-cz9zv [746.612915ms] Apr 6 21:41:29.383: INFO: Created: latency-svc-qljfj Apr 6 21:41:29.392: INFO: Got endpoints: latency-svc-qljfj [767.570589ms] Apr 6 21:41:29.413: INFO: Created: latency-svc-xwm4n Apr 6 21:41:29.422: INFO: Got endpoints: latency-svc-xwm4n [761.246096ms] Apr 6 21:41:29.474: INFO: Created: latency-svc-w7jdt Apr 6 21:41:29.482: INFO: Got endpoints: latency-svc-w7jdt [712.165636ms] Apr 6 21:41:29.532: INFO: Created: latency-svc-fj8q5 Apr 6 21:41:29.549: INFO: Got endpoints: latency-svc-fj8q5 [638.455083ms] Apr 6 21:41:29.610: INFO: Created: latency-svc-sxrfg Apr 6 21:41:29.627: INFO: Got endpoints: latency-svc-sxrfg [707.621161ms] Apr 6 21:41:29.653: INFO: Created: latency-svc-nbgzc Apr 6 21:41:29.676: INFO: Got endpoints: latency-svc-nbgzc [726.941597ms] Apr 6 21:41:29.737: INFO: Created: latency-svc-mxnjn Apr 6 21:41:29.740: INFO: Got endpoints: latency-svc-mxnjn [754.105103ms] Apr 6 21:41:29.760: INFO: Created: latency-svc-xjzjk Apr 6 21:41:29.771: INFO: Got endpoints: 
latency-svc-xjzjk [717.680573ms] Apr 6 21:41:29.796: INFO: Created: latency-svc-6bdh5 Apr 6 21:41:29.808: INFO: Got endpoints: latency-svc-6bdh5 [737.535443ms] Apr 6 21:41:29.827: INFO: Created: latency-svc-7578n Apr 6 21:41:29.868: INFO: Got endpoints: latency-svc-7578n [767.553717ms] Apr 6 21:41:29.873: INFO: Created: latency-svc-kvgmx Apr 6 21:41:29.886: INFO: Got endpoints: latency-svc-kvgmx [755.248827ms] Apr 6 21:41:29.904: INFO: Created: latency-svc-962ks Apr 6 21:41:29.917: INFO: Got endpoints: latency-svc-962ks [724.855863ms] Apr 6 21:41:29.934: INFO: Created: latency-svc-g2644 Apr 6 21:41:29.958: INFO: Got endpoints: latency-svc-g2644 [742.09008ms] Apr 6 21:41:30.018: INFO: Created: latency-svc-f7w76 Apr 6 21:41:30.021: INFO: Got endpoints: latency-svc-f7w76 [734.644931ms] Apr 6 21:41:30.060: INFO: Created: latency-svc-wnx9r Apr 6 21:41:30.074: INFO: Got endpoints: latency-svc-wnx9r [712.423245ms] Apr 6 21:41:30.108: INFO: Created: latency-svc-smfpn Apr 6 21:41:30.168: INFO: Got endpoints: latency-svc-smfpn [776.169968ms] Apr 6 21:41:30.170: INFO: Created: latency-svc-kw6gp Apr 6 21:41:30.176: INFO: Got endpoints: latency-svc-kw6gp [753.864376ms] Apr 6 21:41:30.198: INFO: Created: latency-svc-95pzf Apr 6 21:41:30.207: INFO: Got endpoints: latency-svc-95pzf [724.418307ms] Apr 6 21:41:30.234: INFO: Created: latency-svc-nhxcq Apr 6 21:41:30.249: INFO: Got endpoints: latency-svc-nhxcq [700.324386ms] Apr 6 21:41:30.306: INFO: Created: latency-svc-x8jq9 Apr 6 21:41:30.336: INFO: Got endpoints: latency-svc-x8jq9 [709.463927ms] Apr 6 21:41:30.337: INFO: Created: latency-svc-pfdvj Apr 6 21:41:30.366: INFO: Got endpoints: latency-svc-pfdvj [690.208356ms] Apr 6 21:41:30.444: INFO: Created: latency-svc-gt64l Apr 6 21:41:30.493: INFO: Got endpoints: latency-svc-gt64l [752.803112ms] Apr 6 21:41:30.523: INFO: Created: latency-svc-mp86b Apr 6 21:41:30.538: INFO: Got endpoints: latency-svc-mp86b [766.972935ms] Apr 6 21:41:30.593: INFO: Created: latency-svc-swqcx Apr 6 21:41:30.618: INFO: Got endpoints: latency-svc-swqcx [809.972327ms] Apr 6 21:41:30.618: INFO: Created: latency-svc-88frs Apr 6 21:41:30.629: INFO: Got endpoints: latency-svc-88frs [760.484479ms] Apr 6 21:41:30.654: INFO: Created: latency-svc-t5t5j Apr 6 21:41:30.665: INFO: Got endpoints: latency-svc-t5t5j [778.840668ms] Apr 6 21:41:30.684: INFO: Created: latency-svc-st4dl Apr 6 21:41:30.725: INFO: Got endpoints: latency-svc-st4dl [808.563626ms] Apr 6 21:41:30.744: INFO: Created: latency-svc-mvfcp Apr 6 21:41:30.780: INFO: Got endpoints: latency-svc-mvfcp [822.516017ms] Apr 6 21:41:30.822: INFO: Created: latency-svc-sp8mj Apr 6 21:41:30.880: INFO: Got endpoints: latency-svc-sp8mj [859.331695ms] Apr 6 21:41:30.888: INFO: Created: latency-svc-jvh88 Apr 6 21:41:30.900: INFO: Got endpoints: latency-svc-jvh88 [826.144146ms] Apr 6 21:41:30.924: INFO: Created: latency-svc-xvtk8 Apr 6 21:41:30.948: INFO: Got endpoints: latency-svc-xvtk8 [780.318936ms] Apr 6 21:41:31.024: INFO: Created: latency-svc-72tqw Apr 6 21:41:31.028: INFO: Got endpoints: latency-svc-72tqw [851.654382ms] Apr 6 21:41:31.062: INFO: Created: latency-svc-6zbjv Apr 6 21:41:31.075: INFO: Got endpoints: latency-svc-6zbjv [868.262388ms] Apr 6 21:41:31.105: INFO: Created: latency-svc-t5kxk Apr 6 21:41:31.124: INFO: Got endpoints: latency-svc-t5kxk [875.118763ms] Apr 6 21:41:31.174: INFO: Created: latency-svc-jfgdm Apr 6 21:41:31.179: INFO: Got endpoints: latency-svc-jfgdm [843.36005ms] Apr 6 21:41:31.206: INFO: Created: latency-svc-rl4p7 Apr 6 21:41:31.220: INFO: Got endpoints: 
latency-svc-rl4p7 [854.351254ms] Apr 6 21:41:31.248: INFO: Created: latency-svc-77rn9 Apr 6 21:41:31.262: INFO: Got endpoints: latency-svc-77rn9 [769.249743ms] Apr 6 21:41:31.324: INFO: Created: latency-svc-rr777 Apr 6 21:41:31.331: INFO: Got endpoints: latency-svc-rr777 [792.889903ms] Apr 6 21:41:31.362: INFO: Created: latency-svc-qrq7w Apr 6 21:41:31.377: INFO: Got endpoints: latency-svc-qrq7w [759.03975ms] Apr 6 21:41:31.377: INFO: Latencies: [80.139663ms 139.62488ms 177.72696ms 246.651048ms 298.056297ms 361.894631ms 441.335435ms 458.129382ms 494.278649ms 536.510579ms 626.060734ms 630.374403ms 634.67185ms 638.455083ms 645.273847ms 650.9236ms 664.340373ms 664.394004ms 664.703609ms 665.727033ms 671.923392ms 676.303819ms 688.149683ms 688.649719ms 690.069607ms 690.208356ms 692.347281ms 697.866271ms 698.423783ms 700.169083ms 700.324386ms 702.11557ms 705.988462ms 706.129866ms 706.583674ms 707.621161ms 709.463927ms 712.165636ms 712.423245ms 712.548409ms 712.776888ms 716.108704ms 717.680573ms 718.366288ms 718.409424ms 718.610546ms 718.689686ms 718.931716ms 724.332538ms 724.338136ms 724.418307ms 724.552381ms 724.801382ms 724.855863ms 726.941597ms 727.489643ms 727.927193ms 731.084491ms 733.883391ms 734.644931ms 736.462037ms 736.476771ms 736.5236ms 736.559884ms 736.579182ms 736.820803ms 736.98263ms 737.535443ms 738.588243ms 741.237776ms 742.09008ms 743.137664ms 744.13834ms 746.612915ms 748.286149ms 748.571783ms 748.652424ms 749.129023ms 752.803112ms 753.864376ms 754.009721ms 754.054149ms 754.105103ms 754.248714ms 755.248827ms 755.887997ms 756.405416ms 758.673173ms 758.837824ms 759.03975ms 760.463379ms 760.484479ms 761.246096ms 762.901557ms 762.91114ms 763.345107ms 763.348304ms 765.561515ms 766.146463ms 766.444294ms 766.825969ms 766.972935ms 767.553717ms 767.570589ms 769.249743ms 776.169968ms 776.246851ms 778.840668ms 780.115208ms 780.318936ms 786.520194ms 786.717417ms 787.247576ms 792.889903ms 794.114115ms 795.213881ms 797.302562ms 798.28788ms 798.494974ms 798.747005ms 806.037866ms 807.914146ms 808.563626ms 809.972327ms 810.178755ms 811.193268ms 813.33721ms 813.948615ms 816.023735ms 820.524178ms 821.732799ms 822.516017ms 822.603506ms 823.644274ms 826.144146ms 828.568795ms 829.139409ms 830.166932ms 834.528315ms 837.614307ms 843.36005ms 844.527944ms 851.654382ms 852.058722ms 852.252945ms 852.296427ms 854.351254ms 856.018831ms 859.331695ms 860.79568ms 861.564464ms 868.262388ms 868.359269ms 869.99471ms 871.582549ms 874.25223ms 874.32153ms 875.118763ms 894.785146ms 898.332642ms 905.325434ms 941.824506ms 948.514351ms 954.885686ms 960.566889ms 961.594856ms 965.415558ms 972.030764ms 985.225383ms 995.098585ms 997.028013ms 998.234717ms 1.008869466s 1.009955409s 1.014144321s 1.021033982s 1.023773652s 1.027543879s 1.040992205s 1.051540658s 1.083983455s 1.099439731s 1.122071594s 1.123080933s 1.170094306s 1.194428712s 1.2540245s 1.25544347s 1.298271921s 1.310941606s 1.341228409s 1.381335786s 1.394721437s 1.410042289s 1.446029101s 1.484460652s 1.507284796s 1.542759203s 1.553550655s 1.618650859s] Apr 6 21:41:31.377: INFO: 50 %ile: 766.825969ms Apr 6 21:41:31.377: INFO: 90 %ile: 1.083983455s Apr 6 21:41:31.377: INFO: 99 %ile: 1.553550655s Apr 6 21:41:31.377: INFO: Total sample count: 200 [AfterEach] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 6 21:41:31.377: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svc-latency-7078" for this suite. 
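The 50/90/99 %ile figures above are computed over the 200 sampled endpoint-creation latencies. A minimal Go sketch of that kind of computation follows; it is not the e2e framework's literal code, and the sample data is a stand-in for the run's real measurements:

// percentile_sketch.go — assumed helper, not the framework's implementation.
package main

import (
	"fmt"
	"sort"
	"time"
)

// percentile returns the sample at fraction p (0 < p <= 1) of the sorted
// slice, using a simple "index = p * N" cut like the summary above reflects.
func percentile(sorted []time.Duration, p float64) time.Duration {
	idx := int(p * float64(len(sorted)))
	if idx >= len(sorted) {
		idx = len(sorted) - 1
	}
	return sorted[idx]
}

func main() {
	// Stand-in data; the real run collected 200 samples.
	samples := []time.Duration{
		80 * time.Millisecond, 139 * time.Millisecond, 766 * time.Millisecond,
		905 * time.Millisecond, 1083 * time.Millisecond, 1553 * time.Millisecond,
	}
	sort.Slice(samples, func(i, j int) bool { return samples[i] < samples[j] })
	for _, p := range []float64{0.50, 0.90, 0.99} {
		fmt.Printf("%.0f %%ile: %v\n", p*100, percentile(samples, p))
	}
}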
• [SLOW TEST:15.605 seconds] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Service endpoints latency should not be very high [Conformance]","total":278,"completed":144,"skipped":2473,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 6 21:41:31.387: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide pod UID as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward api env vars Apr 6 21:41:31.463: INFO: Waiting up to 5m0s for pod "downward-api-0023aaec-9ba0-412e-b93d-27ffb72ef666" in namespace "downward-api-8606" to be "success or failure" Apr 6 21:41:31.484: INFO: Pod "downward-api-0023aaec-9ba0-412e-b93d-27ffb72ef666": Phase="Pending", Reason="", readiness=false. Elapsed: 21.257545ms Apr 6 21:41:33.489: INFO: Pod "downward-api-0023aaec-9ba0-412e-b93d-27ffb72ef666": Phase="Pending", Reason="", readiness=false. Elapsed: 2.025792854s Apr 6 21:41:35.493: INFO: Pod "downward-api-0023aaec-9ba0-412e-b93d-27ffb72ef666": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.030136288s STEP: Saw pod success Apr 6 21:41:35.493: INFO: Pod "downward-api-0023aaec-9ba0-412e-b93d-27ffb72ef666" satisfied condition "success or failure" Apr 6 21:41:35.496: INFO: Trying to get logs from node jerma-worker2 pod downward-api-0023aaec-9ba0-412e-b93d-27ffb72ef666 container dapi-container: STEP: delete the pod Apr 6 21:41:35.528: INFO: Waiting for pod downward-api-0023aaec-9ba0-412e-b93d-27ffb72ef666 to disappear Apr 6 21:41:35.532: INFO: Pod downward-api-0023aaec-9ba0-412e-b93d-27ffb72ef666 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 6 21:41:35.532: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-8606" for this suite. 
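The Downward API test above injects the pod's own UID into the container environment via a fieldRef. A compilable sketch of the kind of pod it creates, built with the stable k8s.io/api types; the pod name, image, and command here are illustrative, not the test's literal values:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func downwardAPIPod() *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "downward-api-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "dapi-container",
				Image:   "busybox",
				Command: []string{"sh", "-c", "env"},
				Env: []corev1.EnvVar{{
					Name: "POD_UID",
					ValueFrom: &corev1.EnvVarSource{
						// metadata.uid is one of the fields the downward API exposes.
						FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.uid"},
					},
				}},
			}},
		},
	}
}

func main() { fmt.Println(downwardAPIPod().Spec.Containers[0].Env[0].Name) }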
•{"msg":"PASSED [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance]","total":278,"completed":145,"skipped":2493,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] removes definition from spec when one version gets changed to not be served [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 6 21:41:35.539: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] removes definition from spec when one version gets changed to not be served [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: set up a multi version CRD Apr 6 21:41:35.612: INFO: >>> kubeConfig: /root/.kube/config STEP: mark a version not serverd STEP: check the unserved version gets removed STEP: check the other version is not changed [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 6 21:41:52.583: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-2473" for this suite. • [SLOW TEST:17.051 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 removes definition from spec when one version gets changed to not be served [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] removes definition from spec when one version gets changed to not be served [Conformance]","total":278,"completed":146,"skipped":2511,"failed":0} SS ------------------------------ [sig-cli] Kubectl client Update Demo should do a rolling update of a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 6 21:41:52.590: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [BeforeEach] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:324 [It] should do a rolling update of a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the initial replication controller Apr 6 21:41:52.624: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-7527' Apr 6 21:41:52.911: INFO: stderr: "" Apr 6 
21:41:52.911: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. Apr 6 21:41:52.911: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-7527' Apr 6 21:41:53.025: INFO: stderr: "" Apr 6 21:41:53.025: INFO: stdout: "update-demo-nautilus-5tns2 update-demo-nautilus-wkcpv " Apr 6 21:41:53.025: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-5tns2 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-7527' Apr 6 21:41:53.111: INFO: stderr: "" Apr 6 21:41:53.111: INFO: stdout: "" Apr 6 21:41:53.111: INFO: update-demo-nautilus-5tns2 is created but not running Apr 6 21:41:58.111: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-7527' Apr 6 21:41:58.210: INFO: stderr: "" Apr 6 21:41:58.211: INFO: stdout: "update-demo-nautilus-5tns2 update-demo-nautilus-wkcpv " Apr 6 21:41:58.211: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-5tns2 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-7527' Apr 6 21:41:58.298: INFO: stderr: "" Apr 6 21:41:58.298: INFO: stdout: "true" Apr 6 21:41:58.298: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-5tns2 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-7527' Apr 6 21:41:58.394: INFO: stderr: "" Apr 6 21:41:58.394: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Apr 6 21:41:58.394: INFO: validating pod update-demo-nautilus-5tns2 Apr 6 21:41:58.399: INFO: got data: { "image": "nautilus.jpg" } Apr 6 21:41:58.399: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Apr 6 21:41:58.399: INFO: update-demo-nautilus-5tns2 is verified up and running Apr 6 21:41:58.399: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-wkcpv -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-7527' Apr 6 21:41:58.498: INFO: stderr: "" Apr 6 21:41:58.498: INFO: stdout: "true" Apr 6 21:41:58.498: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-wkcpv -o template --template={{if (exists . 
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-7527' Apr 6 21:41:58.592: INFO: stderr: "" Apr 6 21:41:58.592: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Apr 6 21:41:58.592: INFO: validating pod update-demo-nautilus-wkcpv Apr 6 21:41:58.596: INFO: got data: { "image": "nautilus.jpg" } Apr 6 21:41:58.596: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Apr 6 21:41:58.596: INFO: update-demo-nautilus-wkcpv is verified up and running STEP: rolling-update to new replication controller Apr 6 21:41:58.598: INFO: scanned /root for discovery docs: Apr 6 21:41:58.598: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update update-demo-nautilus --update-period=1s -f - --namespace=kubectl-7527' Apr 6 21:42:21.254: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n" Apr 6 21:42:21.254: INFO: stdout: "Created update-demo-kitten\nScaling up update-demo-kitten from 0 to 2, scaling down update-demo-nautilus from 2 to 0 (keep 2 pods available, don't exceed 3 pods)\nScaling update-demo-kitten up to 1\nScaling update-demo-nautilus down to 1\nScaling update-demo-kitten up to 2\nScaling update-demo-nautilus down to 0\nUpdate succeeded. Deleting old controller: update-demo-nautilus\nRenaming update-demo-kitten to update-demo-nautilus\nreplicationcontroller/update-demo-nautilus rolling updated\n" STEP: waiting for all containers in name=update-demo pods to come up. Apr 6 21:42:21.254: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-7527' Apr 6 21:42:21.345: INFO: stderr: "" Apr 6 21:42:21.345: INFO: stdout: "update-demo-kitten-k5kdc update-demo-kitten-lkhgc " Apr 6 21:42:21.345: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-k5kdc -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-7527' Apr 6 21:42:21.428: INFO: stderr: "" Apr 6 21:42:21.428: INFO: stdout: "true" Apr 6 21:42:21.428: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-k5kdc -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-7527' Apr 6 21:42:21.518: INFO: stderr: "" Apr 6 21:42:21.518: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0" Apr 6 21:42:21.518: INFO: validating pod update-demo-kitten-k5kdc Apr 6 21:42:21.521: INFO: got data: { "image": "kitten.jpg" } Apr 6 21:42:21.521: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg . Apr 6 21:42:21.521: INFO: update-demo-kitten-k5kdc is verified up and running Apr 6 21:42:21.521: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-lkhgc -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-7527' Apr 6 21:42:21.623: INFO: stderr: "" Apr 6 21:42:21.623: INFO: stdout: "true" Apr 6 21:42:21.623: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-lkhgc -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-7527' Apr 6 21:42:21.718: INFO: stderr: "" Apr 6 21:42:21.718: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0" Apr 6 21:42:21.718: INFO: validating pod update-demo-kitten-lkhgc Apr 6 21:42:21.727: INFO: got data: { "image": "kitten.jpg" } Apr 6 21:42:21.727: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg . Apr 6 21:42:21.727: INFO: update-demo-kitten-lkhgc is verified up and running [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 6 21:42:21.727: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-7527" for this suite. • [SLOW TEST:29.144 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:322 should do a rolling update of a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Update Demo should do a rolling update of a replication controller [Conformance]","total":278,"completed":147,"skipped":2513,"failed":0} SSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 6 21:42:21.734: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0644 on node default medium Apr 6 21:42:21.788: INFO: Waiting up to 5m0s for pod "pod-c13e9992-04d1-40c7-846b-0a67ceac7fc0" in namespace "emptydir-5959" to be "success or failure" Apr 6 21:42:21.791: INFO: Pod "pod-c13e9992-04d1-40c7-846b-0a67ceac7fc0": Phase="Pending", Reason="", readiness=false. Elapsed: 3.164351ms Apr 6 21:42:23.795: INFO: Pod "pod-c13e9992-04d1-40c7-846b-0a67ceac7fc0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006901793s Apr 6 21:42:25.799: INFO: Pod "pod-c13e9992-04d1-40c7-846b-0a67ceac7fc0": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.011209956s STEP: Saw pod success Apr 6 21:42:25.799: INFO: Pod "pod-c13e9992-04d1-40c7-846b-0a67ceac7fc0" satisfied condition "success or failure" Apr 6 21:42:25.803: INFO: Trying to get logs from node jerma-worker2 pod pod-c13e9992-04d1-40c7-846b-0a67ceac7fc0 container test-container: STEP: delete the pod Apr 6 21:42:25.823: INFO: Waiting for pod pod-c13e9992-04d1-40c7-846b-0a67ceac7fc0 to disappear Apr 6 21:42:25.828: INFO: Pod pod-c13e9992-04d1-40c7-846b-0a67ceac7fc0 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 6 21:42:25.828: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-5959" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":148,"skipped":2521,"failed":0} SS ------------------------------ [sig-node] ConfigMap should fail to create ConfigMap with empty key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 6 21:42:25.835: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should fail to create ConfigMap with empty key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap that has name configmap-test-emptyKey-29fe9927-f539-4995-b548-25d56bc4b39a [AfterEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 6 21:42:25.897: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-6220" for this suite. •{"msg":"PASSED [sig-node] ConfigMap should fail to create ConfigMap with empty key [Conformance]","total":278,"completed":149,"skipped":2523,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 6 21:42:25.905: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:133 [It] should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating simple DaemonSet "daemon-set" STEP: Check that daemon pods launch on every node of the cluster. 
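The polling output that follows shows the test skipping the tainted control-plane node: the DaemonSet's pods carry no toleration for the node-role.kubernetes.io/master taint. A sketch of the toleration a pod would need to schedule there; the key and effect are taken from the taint printed in the log, the rest is illustrative:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

// masterToleration tolerates the NoSchedule taint the log reports on
// jerma-control-plane, regardless of the taint's value.
func masterToleration() corev1.Toleration {
	return corev1.Toleration{
		Key:      "node-role.kubernetes.io/master",
		Operator: corev1.TolerationOpExists,
		Effect:   corev1.TaintEffectNoSchedule,
	}
}

func main() { fmt.Printf("%+v\n", masterToleration()) }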
Apr 6 21:42:26.014: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 6 21:42:26.031: INFO: Number of nodes with available pods: 0 Apr 6 21:42:26.031: INFO: Node jerma-worker is running more than one daemon pod Apr 6 21:42:27.087: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 6 21:42:27.259: INFO: Number of nodes with available pods: 0 Apr 6 21:42:27.259: INFO: Node jerma-worker is running more than one daemon pod Apr 6 21:42:28.331: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 6 21:42:28.380: INFO: Number of nodes with available pods: 0 Apr 6 21:42:28.380: INFO: Node jerma-worker is running more than one daemon pod Apr 6 21:42:29.051: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 6 21:42:29.151: INFO: Number of nodes with available pods: 0 Apr 6 21:42:29.151: INFO: Node jerma-worker is running more than one daemon pod Apr 6 21:42:30.050: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 6 21:42:30.062: INFO: Number of nodes with available pods: 0 Apr 6 21:42:30.062: INFO: Node jerma-worker is running more than one daemon pod Apr 6 21:42:31.039: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 6 21:42:31.050: INFO: Number of nodes with available pods: 2 Apr 6 21:42:31.050: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Stop a daemon pod, check that the daemon pod is revived. 
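The repeated "Number of nodes with available pods" lines below come from polling until every targeted node runs a Ready daemon pod again. A minimal sketch of such a count; this is an assumed helper, not the framework's code:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

// countAvailable counts daemon pods that are Running and report Ready.
func countAvailable(pods []corev1.Pod) int {
	n := 0
	for _, p := range pods {
		if p.Status.Phase != corev1.PodRunning {
			continue
		}
		for _, c := range p.Status.Conditions {
			if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
				n++
				break
			}
		}
	}
	return n
}

func main() { fmt.Println(countAvailable(nil)) }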
Apr 6 21:42:31.159: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 6 21:42:31.163: INFO: Number of nodes with available pods: 1 Apr 6 21:42:31.163: INFO: Node jerma-worker is running more than one daemon pod Apr 6 21:42:32.168: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 6 21:42:32.172: INFO: Number of nodes with available pods: 1 Apr 6 21:42:32.172: INFO: Node jerma-worker is running more than one daemon pod Apr 6 21:42:33.168: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 6 21:42:33.172: INFO: Number of nodes with available pods: 1 Apr 6 21:42:33.172: INFO: Node jerma-worker is running more than one daemon pod Apr 6 21:42:34.167: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 6 21:42:34.171: INFO: Number of nodes with available pods: 1 Apr 6 21:42:34.171: INFO: Node jerma-worker is running more than one daemon pod Apr 6 21:42:35.167: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 6 21:42:35.171: INFO: Number of nodes with available pods: 1 Apr 6 21:42:35.171: INFO: Node jerma-worker is running more than one daemon pod Apr 6 21:42:36.168: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 6 21:42:36.172: INFO: Number of nodes with available pods: 1 Apr 6 21:42:36.172: INFO: Node jerma-worker is running more than one daemon pod Apr 6 21:42:37.168: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 6 21:42:37.172: INFO: Number of nodes with available pods: 1 Apr 6 21:42:37.172: INFO: Node jerma-worker is running more than one daemon pod Apr 6 21:42:38.168: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 6 21:42:38.172: INFO: Number of nodes with available pods: 1 Apr 6 21:42:38.172: INFO: Node jerma-worker is running more than one daemon pod Apr 6 21:42:39.166: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 6 21:42:39.169: INFO: Number of nodes with available pods: 1 Apr 6 21:42:39.169: INFO: Node jerma-worker is running more than one daemon pod Apr 6 21:42:40.168: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 6 21:42:40.172: INFO: Number of nodes with available pods: 1 Apr 6 21:42:40.172: INFO: Node jerma-worker is running more than one daemon pod Apr 6 21:42:41.168: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip 
checking this node Apr 6 21:42:41.172: INFO: Number of nodes with available pods: 1 Apr 6 21:42:41.172: INFO: Node jerma-worker is running more than one daemon pod Apr 6 21:42:42.168: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 6 21:42:42.172: INFO: Number of nodes with available pods: 1 Apr 6 21:42:42.172: INFO: Node jerma-worker is running more than one daemon pod Apr 6 21:42:43.168: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 6 21:42:43.172: INFO: Number of nodes with available pods: 2 Apr 6 21:42:43.172: INFO: Number of running nodes: 2, number of available pods: 2 [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:99 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-7974, will wait for the garbage collector to delete the pods Apr 6 21:42:43.233: INFO: Deleting DaemonSet.extensions daemon-set took: 4.842697ms Apr 6 21:42:43.633: INFO: Terminating DaemonSet.extensions daemon-set pods took: 400.268059ms Apr 6 21:42:49.252: INFO: Number of nodes with available pods: 0 Apr 6 21:42:49.252: INFO: Number of running nodes: 0, number of available pods: 0 Apr 6 21:42:49.256: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-7974/daemonsets","resourceVersion":"5982614"},"items":null} Apr 6 21:42:49.259: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-7974/pods","resourceVersion":"5982614"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 6 21:42:49.267: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-7974" for this suite. 
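The teardown above deleted the DaemonSet and then waited for the garbage collector to remove the dependent pods. A hedged client-go sketch of that style of delete; the signature shown is the client-go v0.18+ form with a context argument, whereas the v1.17-era client this run used omits it:

// Compilable library sketch; package and function names are illustrative.
package e2esketch

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// deleteDaemonSet removes the DaemonSet and lets the garbage collector
// delete the dependent pods asynchronously.
func deleteDaemonSet(cs kubernetes.Interface, ns, name string) error {
	policy := metav1.DeletePropagationBackground
	return cs.AppsV1().DaemonSets(ns).Delete(context.TODO(), name,
		metav1.DeleteOptions{PropagationPolicy: &policy})
}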
• [SLOW TEST:23.368 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance]","total":278,"completed":150,"skipped":2545,"failed":0} SSSSSSSS ------------------------------ [k8s.io] Security Context When creating a pod with privileged should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 6 21:42:49.274: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:39 [It] should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Apr 6 21:42:49.393: INFO: Waiting up to 5m0s for pod "busybox-privileged-false-fd78ae2b-c6e2-4e17-a919-681826975a70" in namespace "security-context-test-5653" to be "success or failure" Apr 6 21:42:49.398: INFO: Pod "busybox-privileged-false-fd78ae2b-c6e2-4e17-a919-681826975a70": Phase="Pending", Reason="", readiness=false. Elapsed: 4.93405ms Apr 6 21:42:51.403: INFO: Pod "busybox-privileged-false-fd78ae2b-c6e2-4e17-a919-681826975a70": Phase="Pending", Reason="", readiness=false. Elapsed: 2.010433132s Apr 6 21:42:53.408: INFO: Pod "busybox-privileged-false-fd78ae2b-c6e2-4e17-a919-681826975a70": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.014782755s Apr 6 21:42:53.408: INFO: Pod "busybox-privileged-false-fd78ae2b-c6e2-4e17-a919-681826975a70" satisfied condition "success or failure" Apr 6 21:42:53.415: INFO: Got logs for pod "busybox-privileged-false-fd78ae2b-c6e2-4e17-a919-681826975a70": "ip: RTNETLINK answers: Operation not permitted\n" [AfterEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 6 21:42:53.415: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-5653" for this suite. 
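With Privileged set to false, the container above cannot reconfigure host networking, which is why the test captured "ip: RTNETLINK answers: Operation not permitted". A sketch of the container security context involved; the name, image, and command are illustrative:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func unprivilegedContainer() corev1.Container {
	privileged := false
	return corev1.Container{
		Name:    "busybox-privileged-false",
		Image:   "busybox",
		// A network operation that requires privilege, so it fails as logged.
		Command: []string{"ip", "link", "add", "dummy0", "type", "dummy"},
		SecurityContext: &corev1.SecurityContext{
			Privileged: &privileged,
		},
	}
}

func main() { fmt.Println(*unprivilegedContainer().SecurityContext.Privileged) }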
•{"msg":"PASSED [k8s.io] Security Context When creating a pod with privileged should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":151,"skipped":2553,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 6 21:42:53.424: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name projected-configmap-test-volume-a1aa6620-50b2-41d8-8b99-eaa1525b31ba STEP: Creating a pod to test consume configMaps Apr 6 21:42:53.645: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-638cafb0-3fb9-4ada-a50d-0c9337bd22fc" in namespace "projected-6531" to be "success or failure" Apr 6 21:42:53.655: INFO: Pod "pod-projected-configmaps-638cafb0-3fb9-4ada-a50d-0c9337bd22fc": Phase="Pending", Reason="", readiness=false. Elapsed: 10.064007ms Apr 6 21:42:55.659: INFO: Pod "pod-projected-configmaps-638cafb0-3fb9-4ada-a50d-0c9337bd22fc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01356753s Apr 6 21:42:57.663: INFO: Pod "pod-projected-configmaps-638cafb0-3fb9-4ada-a50d-0c9337bd22fc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.017761469s STEP: Saw pod success Apr 6 21:42:57.663: INFO: Pod "pod-projected-configmaps-638cafb0-3fb9-4ada-a50d-0c9337bd22fc" satisfied condition "success or failure" Apr 6 21:42:57.666: INFO: Trying to get logs from node jerma-worker2 pod pod-projected-configmaps-638cafb0-3fb9-4ada-a50d-0c9337bd22fc container projected-configmap-volume-test: STEP: delete the pod Apr 6 21:42:57.686: INFO: Waiting for pod pod-projected-configmaps-638cafb0-3fb9-4ada-a50d-0c9337bd22fc to disappear Apr 6 21:42:57.712: INFO: Pod pod-projected-configmaps-638cafb0-3fb9-4ada-a50d-0c9337bd22fc no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 6 21:42:57.712: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-6531" for this suite. •{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":278,"completed":152,"skipped":2572,"failed":0} SSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replica set. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 6 21:42:57.719: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a replica set. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a ReplicaSet STEP: Ensuring resource quota status captures replicaset creation STEP: Deleting a ReplicaSet STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 6 21:43:08.838: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-9716" for this suite. • [SLOW TEST:11.129 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a replica set. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replica set. [Conformance]","total":278,"completed":153,"skipped":2581,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 6 21:43:08.849: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:133 [It] should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a simple DaemonSet "daemon-set" STEP: Check that daemon pods launch on every node of the cluster. 
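A compilable sketch of what a "simple DaemonSet" like the one just created amounts to: a label selector plus a matching pod template. The labels and image here are illustrative, not the test's literal values:

package main

import (
	"fmt"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func simpleDaemonSet() *appsv1.DaemonSet {
	labels := map[string]string{"daemonset-name": "daemon-set"}
	return &appsv1.DaemonSet{
		ObjectMeta: metav1.ObjectMeta{Name: "daemon-set"},
		Spec: appsv1.DaemonSetSpec{
			// The selector must match the pod template's labels.
			Selector: &metav1.LabelSelector{MatchLabels: labels},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{
					Containers: []corev1.Container{{Name: "app", Image: "busybox"}},
				},
			},
		},
	}
}

func main() { fmt.Println(simpleDaemonSet().Name) }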
Apr 6 21:43:08.919: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 6 21:43:08.959: INFO: Number of nodes with available pods: 0 Apr 6 21:43:08.959: INFO: Node jerma-worker is running more than one daemon pod Apr 6 21:43:09.964: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 6 21:43:09.968: INFO: Number of nodes with available pods: 0 Apr 6 21:43:09.968: INFO: Node jerma-worker is running more than one daemon pod Apr 6 21:43:10.978: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 6 21:43:10.981: INFO: Number of nodes with available pods: 0 Apr 6 21:43:10.981: INFO: Node jerma-worker is running more than one daemon pod Apr 6 21:43:11.964: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 6 21:43:11.968: INFO: Number of nodes with available pods: 0 Apr 6 21:43:11.968: INFO: Node jerma-worker is running more than one daemon pod Apr 6 21:43:12.965: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 6 21:43:12.968: INFO: Number of nodes with available pods: 2 Apr 6 21:43:12.968: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Set a daemon pod's phase to 'Failed', check that the daemon pod is revived. Apr 6 21:43:12.986: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 6 21:43:12.991: INFO: Number of nodes with available pods: 2 Apr 6 21:43:12.991: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Wait for the failed daemon pod to be completely deleted. [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:99 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-9854, will wait for the garbage collector to delete the pods Apr 6 21:43:14.089: INFO: Deleting DaemonSet.extensions daemon-set took: 15.147952ms Apr 6 21:43:14.389: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.225404ms Apr 6 21:43:19.492: INFO: Number of nodes with available pods: 0 Apr 6 21:43:19.492: INFO: Number of running nodes: 0, number of available pods: 0 Apr 6 21:43:19.494: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-9854/daemonsets","resourceVersion":"5982848"},"items":null} Apr 6 21:43:19.497: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-9854/pods","resourceVersion":"5982848"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 6 21:43:19.507: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-9854" for this suite. 
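The "set a daemon pod's phase to 'Failed'" step above updates the pod's status through the API so that the DaemonSet controller replaces the pod. A hedged sketch of that update, again in the client-go v0.18+ signature style:

// Compilable library sketch; not the framework's literal code.
package e2esketch2

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// markPodFailed writes PodFailed into the pod's status; the DaemonSet
// controller then deletes the failed pod and creates a replacement.
func markPodFailed(cs kubernetes.Interface, pod *corev1.Pod) error {
	pod.Status.Phase = corev1.PodFailed
	_, err := cs.CoreV1().Pods(pod.Namespace).UpdateStatus(
		context.TODO(), pod, metav1.UpdateOptions{})
	return err
}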
• [SLOW TEST:10.664 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]","total":278,"completed":154,"skipped":2618,"failed":0} SSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 6 21:43:19.514: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Apr 6 21:43:19.644: INFO: pod1.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod3", UID:"434f762d-3189-43fb-96e3-365f367856e0", Controller:(*bool)(0xc00338e002), BlockOwnerDeletion:(*bool)(0xc00338e003)}} Apr 6 21:43:19.656: INFO: pod2.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod1", UID:"6ac02131-a23a-4774-b3da-9efe633ecfe1", Controller:(*bool)(0xc00338e1b2), BlockOwnerDeletion:(*bool)(0xc00338e1b3)}} Apr 6 21:43:19.684: INFO: pod3.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod2", UID:"71a9b6a4-9536-43d2-942d-8be96359538c", Controller:(*bool)(0xc00338e3aa), BlockOwnerDeletion:(*bool)(0xc00338e3ab)}} [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 6 21:43:24.710: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-3378" for this suite. 
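The dependency circle above (pod1 owned by pod3, pod2 by pod1, pod3 by pod2) is expressed purely through metadata.ownerReferences. A sketch of one link, reusing the UID the log printed for pod3; the image is a placeholder:

apiVersion: v1
kind: Pod
metadata:
  name: pod1
  ownerReferences:
  - apiVersion: v1
    kind: Pod
    name: pod3
    uid: 434f762d-3189-43fb-96e3-365f367856e0   # must be the live pod3 UID (value taken from the log)
    controller: true
    blockOwnerDeletion: true
spec:
  containers:
  - name: c
    image: k8s.gcr.io/pause:3.1   # placeholder image
# pod2 and pod3 carry analogous references back to pod1 and pod2, closing the
# circle; the test asserts the garbage collector does not deadlock on it.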
• [SLOW TEST:5.254 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance]","total":278,"completed":155,"skipped":2629,"failed":0} S ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 6 21:43:24.768: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the container STEP: wait for the container to reach Failed STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set Apr 6 21:43:28.915: INFO: Expected: &{DONE} to match Container's Termination Message: DONE -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 6 21:43:28.990: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-9273" for this suite. 
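FallbackToLogsOnError is a per-container field: when the container exits non-zero and has written nothing to terminationMessagePath, the kubelet copies the tail of its log into the termination message, which is how "DONE" ends up matched above. A minimal sketch (image and command are illustrative):

apiVersion: v1
kind: Pod
metadata:
  name: termination-message-test
spec:
  restartPolicy: Never
  containers:
  - name: term
    image: busybox:1.29                                 # placeholder image
    command: ["/bin/sh", "-c", "echo -n DONE; exit 1"]  # fail without writing /dev/termination-log
    terminationMessagePolicy: FallbackToLogsOnError     # tail of the container log becomes the message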
•{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":278,"completed":156,"skipped":2630,"failed":0} SSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group and version but different kinds [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 6 21:43:28.997: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for multiple CRDs of same group and version but different kinds [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: CRs in the same group and version but different kinds (two CRDs) show up in OpenAPI documentation Apr 6 21:43:29.068: INFO: >>> kubeConfig: /root/.kube/config Apr 6 21:43:31.984: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 6 21:43:42.512: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-3106" for this suite. • [SLOW TEST:13.520 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for multiple CRDs of same group and version but different kinds [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group and version but different kinds [Conformance]","total":278,"completed":157,"skipped":2633,"failed":0} SSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 6 21:43:42.518: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Apr 6 21:43:42.622: INFO: Waiting up to 5m0s for pod "downwardapi-volume-ac2a12e0-f35a-4e5a-92c1-6cea39d7777d" in namespace "projected-4752" to 
be "success or failure" Apr 6 21:43:42.639: INFO: Pod "downwardapi-volume-ac2a12e0-f35a-4e5a-92c1-6cea39d7777d": Phase="Pending", Reason="", readiness=false. Elapsed: 17.033396ms Apr 6 21:43:44.643: INFO: Pod "downwardapi-volume-ac2a12e0-f35a-4e5a-92c1-6cea39d7777d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020524264s Apr 6 21:43:46.647: INFO: Pod "downwardapi-volume-ac2a12e0-f35a-4e5a-92c1-6cea39d7777d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.025047321s STEP: Saw pod success Apr 6 21:43:46.647: INFO: Pod "downwardapi-volume-ac2a12e0-f35a-4e5a-92c1-6cea39d7777d" satisfied condition "success or failure" Apr 6 21:43:46.651: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-ac2a12e0-f35a-4e5a-92c1-6cea39d7777d container client-container: STEP: delete the pod Apr 6 21:43:46.682: INFO: Waiting for pod downwardapi-volume-ac2a12e0-f35a-4e5a-92c1-6cea39d7777d to disappear Apr 6 21:43:46.686: INFO: Pod downwardapi-volume-ac2a12e0-f35a-4e5a-92c1-6cea39d7777d no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 6 21:43:46.686: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-4752" for this suite. •{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance]","total":278,"completed":158,"skipped":2643,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 6 21:43:46.694: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting up data [It] should support subpaths with projected pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating pod pod-subpath-test-projected-w5v2 STEP: Creating a pod to test atomic-volume-subpath Apr 6 21:43:46.778: INFO: Waiting up to 5m0s for pod "pod-subpath-test-projected-w5v2" in namespace "subpath-3868" to be "success or failure" Apr 6 21:43:46.807: INFO: Pod "pod-subpath-test-projected-w5v2": Phase="Pending", Reason="", readiness=false. Elapsed: 29.200312ms Apr 6 21:43:48.812: INFO: Pod "pod-subpath-test-projected-w5v2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.033974587s Apr 6 21:43:50.816: INFO: Pod "pod-subpath-test-projected-w5v2": Phase="Running", Reason="", readiness=true. Elapsed: 4.037882981s Apr 6 21:43:52.819: INFO: Pod "pod-subpath-test-projected-w5v2": Phase="Running", Reason="", readiness=true. Elapsed: 6.041482685s Apr 6 21:43:54.824: INFO: Pod "pod-subpath-test-projected-w5v2": Phase="Running", Reason="", readiness=true. Elapsed: 8.045796205s Apr 6 21:43:56.828: INFO: Pod "pod-subpath-test-projected-w5v2": Phase="Running", Reason="", readiness=true. 
Elapsed: 10.05049409s Apr 6 21:43:58.833: INFO: Pod "pod-subpath-test-projected-w5v2": Phase="Running", Reason="", readiness=true. Elapsed: 12.054796531s Apr 6 21:44:00.837: INFO: Pod "pod-subpath-test-projected-w5v2": Phase="Running", Reason="", readiness=true. Elapsed: 14.059272548s Apr 6 21:44:02.841: INFO: Pod "pod-subpath-test-projected-w5v2": Phase="Running", Reason="", readiness=true. Elapsed: 16.06359681s Apr 6 21:44:04.846: INFO: Pod "pod-subpath-test-projected-w5v2": Phase="Running", Reason="", readiness=true. Elapsed: 18.068077672s Apr 6 21:44:06.850: INFO: Pod "pod-subpath-test-projected-w5v2": Phase="Running", Reason="", readiness=true. Elapsed: 20.072335836s Apr 6 21:44:08.855: INFO: Pod "pod-subpath-test-projected-w5v2": Phase="Running", Reason="", readiness=true. Elapsed: 22.076756185s Apr 6 21:44:10.859: INFO: Pod "pod-subpath-test-projected-w5v2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.081005954s STEP: Saw pod success Apr 6 21:44:10.859: INFO: Pod "pod-subpath-test-projected-w5v2" satisfied condition "success or failure" Apr 6 21:44:10.862: INFO: Trying to get logs from node jerma-worker2 pod pod-subpath-test-projected-w5v2 container test-container-subpath-projected-w5v2: STEP: delete the pod Apr 6 21:44:10.885: INFO: Waiting for pod pod-subpath-test-projected-w5v2 to disappear Apr 6 21:44:10.890: INFO: Pod pod-subpath-test-projected-w5v2 no longer exists STEP: Deleting pod pod-subpath-test-projected-w5v2 Apr 6 21:44:10.890: INFO: Deleting pod "pod-subpath-test-projected-w5v2" in namespace "subpath-3868" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 6 21:44:10.893: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-3868" for this suite. 
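The atomic-writer tests mount a single entry of a volume via volumeMounts[].subPath. A rough sketch of the projected variant; the projected source, key, mount path, and command are assumptions, since the log only shows the pod name:

apiVersion: v1
kind: Pod
metadata:
  name: pod-subpath-test-projected-w5v2
spec:
  restartPolicy: Never
  volumes:
  - name: test-volume
    projected:
      sources:
      - configMap:
          name: my-projected-source      # hypothetical source object with a key named "data"
  containers:
  - name: test-container-subpath-projected-w5v2
    image: busybox:1.29                  # placeholder image
    command: ["/bin/sh", "-c", "cat /test-volume/data; sleep 20"]   # illustrative
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume/data
      subPath: data                      # mount one entry of the volume, not the whole directory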
• [SLOW TEST:24.206 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with projected pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [LinuxOnly] [Conformance]","total":278,"completed":159,"skipped":2711,"failed":0} SSSSSSSS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 6 21:44:10.900: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test override arguments Apr 6 21:44:10.982: INFO: Waiting up to 5m0s for pod "client-containers-1a091892-6d1e-49af-9c69-102237fd1c7d" in namespace "containers-6797" to be "success or failure" Apr 6 21:44:11.003: INFO: Pod "client-containers-1a091892-6d1e-49af-9c69-102237fd1c7d": Phase="Pending", Reason="", readiness=false. Elapsed: 21.108378ms Apr 6 21:44:13.007: INFO: Pod "client-containers-1a091892-6d1e-49af-9c69-102237fd1c7d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024962737s Apr 6 21:44:15.012: INFO: Pod "client-containers-1a091892-6d1e-49af-9c69-102237fd1c7d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.029279542s STEP: Saw pod success Apr 6 21:44:15.012: INFO: Pod "client-containers-1a091892-6d1e-49af-9c69-102237fd1c7d" satisfied condition "success or failure" Apr 6 21:44:15.015: INFO: Trying to get logs from node jerma-worker2 pod client-containers-1a091892-6d1e-49af-9c69-102237fd1c7d container test-container: STEP: delete the pod Apr 6 21:44:15.036: INFO: Waiting for pod client-containers-1a091892-6d1e-49af-9c69-102237fd1c7d to disappear Apr 6 21:44:15.040: INFO: Pod client-containers-1a091892-6d1e-49af-9c69-102237fd1c7d no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 6 21:44:15.040: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-6797" for this suite. 
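Overriding an image's default arguments maps to spec.containers[].args: args replaces the image's CMD, while command would replace the ENTRYPOINT. A minimal sketch with illustrative values:

apiVersion: v1
kind: Pod
metadata:
  name: client-containers
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: docker.io/library/busybox:1.29       # placeholder image
    args: ["echo", "overridden", "arguments"]   # replaces busybox's default CMD ("sh")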
•{"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]","total":278,"completed":160,"skipped":2719,"failed":0} ------------------------------ [k8s.io] [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 6 21:44:15.047: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename events STEP: Waiting for a default service account to be provisioned in namespace [It] should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: retrieving the pod Apr 6 21:44:19.162: INFO: &Pod{ObjectMeta:{send-events-76e5c762-5989-47f2-af12-7bbabd976ba1 events-1765 /api/v1/namespaces/events-1765/pods/send-events-76e5c762-5989-47f2-af12-7bbabd976ba1 b91b1beb-1730-4e5c-a424-d3fd1acd3eb3 5983216 0 2020-04-06 21:44:15 +0000 UTC map[name:foo time:98022370] map[] [] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-2snwl,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-2snwl,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:p,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,Command:[],Args:[serve-hostname],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:,HostPort:0,ContainerPort:80,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-2snwl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,
TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-06 21:44:15 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-06 21:44:17 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-06 21:44:17 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-06 21:44:15 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.10,PodIP:10.244.1.224,StartTime:2020-04-06 21:44:15 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:p,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-04-06 21:44:17 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,ImageID:gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5,ContainerID:containerd://6cbd875e1b8feca4dfc64c6ae4074f55fa575d260a0cee32a97ebffca483c064,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.224,},},EphemeralContainerStatuses:[]ContainerStatus{},},} STEP: checking for scheduler event about the pod Apr 6 21:44:21.167: INFO: Saw scheduler event for our pod. STEP: checking for kubelet event about the pod Apr 6 21:44:23.172: INFO: Saw kubelet event for our pod. STEP: deleting the pod [AfterEach] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 6 21:44:23.177: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "events-1765" for this suite. 
• [SLOW TEST:8.145 seconds] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running [Conformance]","total":278,"completed":161,"skipped":2719,"failed":0} SSSSSS ------------------------------ [sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 6 21:44:23.191: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating projection with configMap that has name projected-configmap-test-upd-861fceb1-eee9-4a18-b276-002cf3341127 STEP: Creating the pod STEP: Updating configmap projected-configmap-test-upd-861fceb1-eee9-4a18-b276-002cf3341127 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 6 21:44:31.331: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2572" for this suite. 
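A projected volume with a configMap source behaves like a plain configMap volume whose files the kubelet rewrites in place after the ConfigMap changes, which is what "waiting to observe update in volume" polls for. A sketch reusing the ConfigMap name from the log (image, command, and mount path are assumptions):

apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-configmaps
spec:
  volumes:
  - name: projected-configmap-volume
    projected:
      sources:
      - configMap:
          name: projected-configmap-test-upd-861fceb1-eee9-4a18-b276-002cf3341127
  containers:
  - name: projected-configmap-volume-test
    image: busybox:1.29   # placeholder image
    command: ["/bin/sh", "-c", "while true; do cat /etc/projected/*; sleep 2; done"]
    volumeMounts:
    - name: projected-configmap-volume
      mountPath: /etc/projected
# after "STEP: Updating configmap ...", the kubelet rewrites the files on its
# next sync, so the loop eventually prints the new value.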
• [SLOW TEST:8.148 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":162,"skipped":2725,"failed":0} SSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 6 21:44:31.340: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir volume type on tmpfs Apr 6 21:44:31.396: INFO: Waiting up to 5m0s for pod "pod-2391aca4-bca4-41a5-a38e-9a63b56e682b" in namespace "emptydir-2566" to be "success or failure" Apr 6 21:44:31.399: INFO: Pod "pod-2391aca4-bca4-41a5-a38e-9a63b56e682b": Phase="Pending", Reason="", readiness=false. Elapsed: 3.408809ms Apr 6 21:44:33.403: INFO: Pod "pod-2391aca4-bca4-41a5-a38e-9a63b56e682b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007414238s Apr 6 21:44:35.407: INFO: Pod "pod-2391aca4-bca4-41a5-a38e-9a63b56e682b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.01095261s STEP: Saw pod success Apr 6 21:44:35.407: INFO: Pod "pod-2391aca4-bca4-41a5-a38e-9a63b56e682b" satisfied condition "success or failure" Apr 6 21:44:35.410: INFO: Trying to get logs from node jerma-worker2 pod pod-2391aca4-bca4-41a5-a38e-9a63b56e682b container test-container: STEP: delete the pod Apr 6 21:44:35.426: INFO: Waiting for pod pod-2391aca4-bca4-41a5-a38e-9a63b56e682b to disappear Apr 6 21:44:35.451: INFO: Pod pod-2391aca4-bca4-41a5-a38e-9a63b56e682b no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 6 21:44:35.451: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-2566" for this suite. 
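"volume on tmpfs" is an emptyDir with medium: Memory, so the mount is RAM-backed rather than node disk. A minimal sketch (image, command, and mount path are illustrative):

apiVersion: v1
kind: Pod
metadata:
  name: pod-emptydir-tmpfs
spec:
  restartPolicy: Never
  volumes:
  - name: test-volume
    emptyDir:
      medium: Memory        # tmpfs; omit `medium` for node-disk backing
  containers:
  - name: test-container
    image: busybox:1.29     # placeholder image
    command: ["/bin/sh", "-c", "ls -ld /test-volume"]   # prints the mode bits under test
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume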
•{"msg":"PASSED [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":163,"skipped":2733,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 6 21:44:35.459: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Performing setup for networking test in namespace pod-network-test-4370 STEP: creating a selector STEP: Creating the service pods in kubernetes Apr 6 21:44:35.505: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods Apr 6 21:45:03.609: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.227:8080/dial?request=hostname&protocol=http&host=10.244.1.226&port=8080&tries=1'] Namespace:pod-network-test-4370 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 6 21:45:03.609: INFO: >>> kubeConfig: /root/.kube/config I0406 21:45:03.630418 6 log.go:172] (0xc001d464d0) (0xc0027315e0) Create stream I0406 21:45:03.630446 6 log.go:172] (0xc001d464d0) (0xc0027315e0) Stream added, broadcasting: 1 I0406 21:45:03.632342 6 log.go:172] (0xc001d464d0) Reply frame received for 1 I0406 21:45:03.632378 6 log.go:172] (0xc001d464d0) (0xc001e90500) Create stream I0406 21:45:03.632390 6 log.go:172] (0xc001d464d0) (0xc001e90500) Stream added, broadcasting: 3 I0406 21:45:03.633291 6 log.go:172] (0xc001d464d0) Reply frame received for 3 I0406 21:45:03.633334 6 log.go:172] (0xc001d464d0) (0xc001e906e0) Create stream I0406 21:45:03.633350 6 log.go:172] (0xc001d464d0) (0xc001e906e0) Stream added, broadcasting: 5 I0406 21:45:03.634261 6 log.go:172] (0xc001d464d0) Reply frame received for 5 I0406 21:45:03.712875 6 log.go:172] (0xc001d464d0) Data frame received for 3 I0406 21:45:03.712898 6 log.go:172] (0xc001e90500) (3) Data frame handling I0406 21:45:03.712926 6 log.go:172] (0xc001e90500) (3) Data frame sent I0406 21:45:03.713929 6 log.go:172] (0xc001d464d0) Data frame received for 3 I0406 21:45:03.713950 6 log.go:172] (0xc001e90500) (3) Data frame handling I0406 21:45:03.714063 6 log.go:172] (0xc001d464d0) Data frame received for 5 I0406 21:45:03.714078 6 log.go:172] (0xc001e906e0) (5) Data frame handling I0406 21:45:03.715550 6 log.go:172] (0xc001d464d0) Data frame received for 1 I0406 21:45:03.715568 6 log.go:172] (0xc0027315e0) (1) Data frame handling I0406 21:45:03.715576 6 log.go:172] (0xc0027315e0) (1) Data frame sent I0406 21:45:03.715635 6 log.go:172] (0xc001d464d0) (0xc0027315e0) Stream removed, broadcasting: 1 I0406 21:45:03.715700 6 log.go:172] (0xc001d464d0) Go away received I0406 21:45:03.715748 6 log.go:172] (0xc001d464d0) (0xc0027315e0) Stream removed, broadcasting: 1 I0406 21:45:03.715771 6 log.go:172] (0xc001d464d0) (0xc001e90500) 
Stream removed, broadcasting: 3 I0406 21:45:03.715788 6 log.go:172] (0xc001d464d0) (0xc001e906e0) Stream removed, broadcasting: 5 Apr 6 21:45:03.715: INFO: Waiting for responses: map[] Apr 6 21:45:03.724: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.227:8080/dial?request=hostname&protocol=http&host=10.244.2.48&port=8080&tries=1'] Namespace:pod-network-test-4370 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 6 21:45:03.724: INFO: >>> kubeConfig: /root/.kube/config I0406 21:45:03.766621 6 log.go:172] (0xc0029b8a50) (0xc001e90d20) Create stream I0406 21:45:03.766649 6 log.go:172] (0xc0029b8a50) (0xc001e90d20) Stream added, broadcasting: 1 I0406 21:45:03.768694 6 log.go:172] (0xc0029b8a50) Reply frame received for 1 I0406 21:45:03.768724 6 log.go:172] (0xc0029b8a50) (0xc001e90dc0) Create stream I0406 21:45:03.768736 6 log.go:172] (0xc0029b8a50) (0xc001e90dc0) Stream added, broadcasting: 3 I0406 21:45:03.769801 6 log.go:172] (0xc0029b8a50) Reply frame received for 3 I0406 21:45:03.769834 6 log.go:172] (0xc0029b8a50) (0xc00288eaa0) Create stream I0406 21:45:03.769846 6 log.go:172] (0xc0029b8a50) (0xc00288eaa0) Stream added, broadcasting: 5 I0406 21:45:03.770642 6 log.go:172] (0xc0029b8a50) Reply frame received for 5 I0406 21:45:03.834516 6 log.go:172] (0xc0029b8a50) Data frame received for 3 I0406 21:45:03.834553 6 log.go:172] (0xc001e90dc0) (3) Data frame handling I0406 21:45:03.834576 6 log.go:172] (0xc001e90dc0) (3) Data frame sent I0406 21:45:03.834833 6 log.go:172] (0xc0029b8a50) Data frame received for 5 I0406 21:45:03.834865 6 log.go:172] (0xc00288eaa0) (5) Data frame handling I0406 21:45:03.835120 6 log.go:172] (0xc0029b8a50) Data frame received for 3 I0406 21:45:03.835151 6 log.go:172] (0xc001e90dc0) (3) Data frame handling I0406 21:45:03.836840 6 log.go:172] (0xc0029b8a50) Data frame received for 1 I0406 21:45:03.836861 6 log.go:172] (0xc001e90d20) (1) Data frame handling I0406 21:45:03.836875 6 log.go:172] (0xc001e90d20) (1) Data frame sent I0406 21:45:03.836891 6 log.go:172] (0xc0029b8a50) (0xc001e90d20) Stream removed, broadcasting: 1 I0406 21:45:03.836967 6 log.go:172] (0xc0029b8a50) Go away received I0406 21:45:03.837013 6 log.go:172] (0xc0029b8a50) (0xc001e90d20) Stream removed, broadcasting: 1 I0406 21:45:03.837037 6 log.go:172] (0xc0029b8a50) (0xc001e90dc0) Stream removed, broadcasting: 3 I0406 21:45:03.837051 6 log.go:172] (0xc0029b8a50) (0xc00288eaa0) Stream removed, broadcasting: 5 Apr 6 21:45:03.837: INFO: Waiting for responses: map[] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 6 21:45:03.837: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-4370" for this suite. 
• [SLOW TEST:28.386 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29 should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":164,"skipped":2752,"failed":0} SS ------------------------------ [sig-cli] Kubectl client Kubectl run default should create an rc or deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 6 21:45:03.845: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [BeforeEach] Kubectl run default /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1489 [It] should create an rc or deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: running the image docker.io/library/httpd:2.4.38-alpine Apr 6 21:45:03.912: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-deployment --image=docker.io/library/httpd:2.4.38-alpine --namespace=kubectl-5242' Apr 6 21:45:04.024: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" Apr 6 21:45:04.024: INFO: stdout: "deployment.apps/e2e-test-httpd-deployment created\n" STEP: verifying the pod controlled by e2e-test-httpd-deployment gets created [AfterEach] Kubectl run default /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1495 Apr 6 21:45:06.063: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-httpd-deployment --namespace=kubectl-5242' Apr 6 21:45:06.166: INFO: stderr: "" Apr 6 21:45:06.166: INFO: stdout: "deployment.apps \"e2e-test-httpd-deployment\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 6 21:45:06.166: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-5242" for this suite. 
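The deprecation warning above is what v1.17 kubectl prints for the apps/v1 deployment generator; the command expands to roughly this Deployment, with the generator's conventional run=<name> label (a sketch, not the exact server-side object):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: e2e-test-httpd-deployment
  labels:
    run: e2e-test-httpd-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      run: e2e-test-httpd-deployment
  template:
    metadata:
      labels:
        run: e2e-test-httpd-deployment
    spec:
      containers:
      - name: e2e-test-httpd-deployment
        image: docker.io/library/httpd:2.4.38-alpine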
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl run default should create an rc or deployment from an image [Conformance]","total":278,"completed":165,"skipped":2754,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 6 21:45:06.206: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177 [It] should contain environment variables for services [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Apr 6 21:45:10.378: INFO: Waiting up to 5m0s for pod "client-envvars-e487bb0c-02a0-4a51-b927-a9ffef6ce986" in namespace "pods-8721" to be "success or failure" Apr 6 21:45:10.396: INFO: Pod "client-envvars-e487bb0c-02a0-4a51-b927-a9ffef6ce986": Phase="Pending", Reason="", readiness=false. Elapsed: 17.612777ms Apr 6 21:45:12.400: INFO: Pod "client-envvars-e487bb0c-02a0-4a51-b927-a9ffef6ce986": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021432333s Apr 6 21:45:14.404: INFO: Pod "client-envvars-e487bb0c-02a0-4a51-b927-a9ffef6ce986": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.025951309s STEP: Saw pod success Apr 6 21:45:14.404: INFO: Pod "client-envvars-e487bb0c-02a0-4a51-b927-a9ffef6ce986" satisfied condition "success or failure" Apr 6 21:45:14.407: INFO: Trying to get logs from node jerma-worker2 pod client-envvars-e487bb0c-02a0-4a51-b927-a9ffef6ce986 container env3cont: STEP: delete the pod Apr 6 21:45:14.427: INFO: Waiting for pod client-envvars-e487bb0c-02a0-4a51-b927-a9ffef6ce986 to disappear Apr 6 21:45:14.430: INFO: Pod client-envvars-e487bb0c-02a0-4a51-b927-a9ffef6ce986 no longer exists [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 6 21:45:14.430: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-8721" for this suite. 
• [SLOW TEST:8.230 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should contain environment variables for services [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance]","total":278,"completed":166,"skipped":2772,"failed":0} SSS ------------------------------ [sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 6 21:45:14.437: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name configmap-test-volume-e9787e4f-449e-4a06-be61-ab90134148bf STEP: Creating a pod to test consume configMaps Apr 6 21:45:14.510: INFO: Waiting up to 5m0s for pod "pod-configmaps-8adfef12-2065-4623-b810-a0d55f8c58f0" in namespace "configmap-3506" to be "success or failure" Apr 6 21:45:14.527: INFO: Pod "pod-configmaps-8adfef12-2065-4623-b810-a0d55f8c58f0": Phase="Pending", Reason="", readiness=false. Elapsed: 16.844076ms Apr 6 21:45:16.531: INFO: Pod "pod-configmaps-8adfef12-2065-4623-b810-a0d55f8c58f0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020579042s Apr 6 21:45:18.535: INFO: Pod "pod-configmaps-8adfef12-2065-4623-b810-a0d55f8c58f0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.025017857s STEP: Saw pod success Apr 6 21:45:18.535: INFO: Pod "pod-configmaps-8adfef12-2065-4623-b810-a0d55f8c58f0" satisfied condition "success or failure" Apr 6 21:45:18.538: INFO: Trying to get logs from node jerma-worker pod pod-configmaps-8adfef12-2065-4623-b810-a0d55f8c58f0 container configmap-volume-test: STEP: delete the pod Apr 6 21:45:18.573: INFO: Waiting for pod pod-configmaps-8adfef12-2065-4623-b810-a0d55f8c58f0 to disappear Apr 6 21:45:18.580: INFO: Pod pod-configmaps-8adfef12-2065-4623-b810-a0d55f8c58f0 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 6 21:45:18.580: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-3506" for this suite. 
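"consumable in multiple volumes in the same pod" means two volume entries may reference the same ConfigMap and be mounted at different paths. A sketch reusing the ConfigMap name from the log (image, command, and paths are assumptions):

apiVersion: v1
kind: Pod
metadata:
  name: pod-configmaps
spec:
  restartPolicy: Never
  volumes:
  - name: configmap-volume-1
    configMap:
      name: configmap-test-volume-e9787e4f-449e-4a06-be61-ab90134148bf
  - name: configmap-volume-2
    configMap:
      name: configmap-test-volume-e9787e4f-449e-4a06-be61-ab90134148bf   # same ConfigMap, second volume
  containers:
  - name: configmap-volume-test
    image: busybox:1.29   # placeholder image
    command: ["/bin/sh", "-c", "cat /etc/cm1/* /etc/cm2/*"]
    volumeMounts:
    - name: configmap-volume-1
      mountPath: /etc/cm1
    - name: configmap-volume-2
      mountPath: /etc/cm2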
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":278,"completed":167,"skipped":2775,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 6 21:45:18.588: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [BeforeEach] when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81 [It] should have an terminated reason [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 6 21:45:22.644: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-129" for this suite. •{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance]","total":278,"completed":168,"skipped":2796,"failed":0} SSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 6 21:45:22.654: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting up data [It] should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating pod pod-subpath-test-configmap-hnws STEP: Creating a pod to test atomic-volume-subpath Apr 6 21:45:22.745: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-hnws" in namespace "subpath-9388" to be "success or failure" Apr 6 21:45:22.749: INFO: Pod "pod-subpath-test-configmap-hnws": Phase="Pending", Reason="", readiness=false. Elapsed: 3.801892ms Apr 6 21:45:24.753: INFO: Pod "pod-subpath-test-configmap-hnws": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008023215s Apr 6 21:45:26.757: INFO: Pod "pod-subpath-test-configmap-hnws": Phase="Running", Reason="", readiness=true. Elapsed: 4.011699539s Apr 6 21:45:28.761: INFO: Pod "pod-subpath-test-configmap-hnws": Phase="Running", Reason="", readiness=true. 
Elapsed: 6.015467413s Apr 6 21:45:30.765: INFO: Pod "pod-subpath-test-configmap-hnws": Phase="Running", Reason="", readiness=true. Elapsed: 8.019673123s Apr 6 21:45:32.768: INFO: Pod "pod-subpath-test-configmap-hnws": Phase="Running", Reason="", readiness=true. Elapsed: 10.023010763s Apr 6 21:45:34.773: INFO: Pod "pod-subpath-test-configmap-hnws": Phase="Running", Reason="", readiness=true. Elapsed: 12.027408358s Apr 6 21:45:36.777: INFO: Pod "pod-subpath-test-configmap-hnws": Phase="Running", Reason="", readiness=true. Elapsed: 14.031665692s Apr 6 21:45:38.780: INFO: Pod "pod-subpath-test-configmap-hnws": Phase="Running", Reason="", readiness=true. Elapsed: 16.03514987s Apr 6 21:45:40.806: INFO: Pod "pod-subpath-test-configmap-hnws": Phase="Running", Reason="", readiness=true. Elapsed: 18.060544738s Apr 6 21:45:42.812: INFO: Pod "pod-subpath-test-configmap-hnws": Phase="Running", Reason="", readiness=true. Elapsed: 20.066417154s Apr 6 21:45:44.816: INFO: Pod "pod-subpath-test-configmap-hnws": Phase="Running", Reason="", readiness=true. Elapsed: 22.070622578s Apr 6 21:45:46.820: INFO: Pod "pod-subpath-test-configmap-hnws": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.074750211s STEP: Saw pod success Apr 6 21:45:46.820: INFO: Pod "pod-subpath-test-configmap-hnws" satisfied condition "success or failure" Apr 6 21:45:46.823: INFO: Trying to get logs from node jerma-worker2 pod pod-subpath-test-configmap-hnws container test-container-subpath-configmap-hnws: STEP: delete the pod Apr 6 21:45:46.854: INFO: Waiting for pod pod-subpath-test-configmap-hnws to disappear Apr 6 21:45:46.868: INFO: Pod pod-subpath-test-configmap-hnws no longer exists STEP: Deleting pod pod-subpath-test-configmap-hnws Apr 6 21:45:46.868: INFO: Deleting pod "pod-subpath-test-configmap-hnws" in namespace "subpath-9388" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 6 21:45:46.871: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-9388" for this suite. 
• [SLOW TEST:24.227 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]","total":278,"completed":169,"skipped":2804,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 6 21:45:46.882: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward api env vars Apr 6 21:45:46.972: INFO: Waiting up to 5m0s for pod "downward-api-a69b53e9-ee8a-4d89-bfde-6ef9ed8db74d" in namespace "downward-api-3434" to be "success or failure" Apr 6 21:45:46.992: INFO: Pod "downward-api-a69b53e9-ee8a-4d89-bfde-6ef9ed8db74d": Phase="Pending", Reason="", readiness=false. Elapsed: 19.705765ms Apr 6 21:45:48.995: INFO: Pod "downward-api-a69b53e9-ee8a-4d89-bfde-6ef9ed8db74d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022876046s Apr 6 21:45:51.001: INFO: Pod "downward-api-a69b53e9-ee8a-4d89-bfde-6ef9ed8db74d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.029484921s STEP: Saw pod success Apr 6 21:45:51.002: INFO: Pod "downward-api-a69b53e9-ee8a-4d89-bfde-6ef9ed8db74d" satisfied condition "success or failure" Apr 6 21:45:51.012: INFO: Trying to get logs from node jerma-worker2 pod downward-api-a69b53e9-ee8a-4d89-bfde-6ef9ed8db74d container dapi-container: STEP: delete the pod Apr 6 21:45:51.031: INFO: Waiting for pod downward-api-a69b53e9-ee8a-4d89-bfde-6ef9ed8db74d to disappear Apr 6 21:45:51.075: INFO: Pod downward-api-a69b53e9-ee8a-4d89-bfde-6ef9ed8db74d no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 6 21:45:51.075: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-3434" for this suite. 
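Limits and requests reach the container's environment through env[].valueFrom.resourceFieldRef. A sketch of the dapi-container pattern; the image, command, and resource values are illustrative:

apiVersion: v1
kind: Pod
metadata:
  name: downward-api-pod
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox:1.29   # placeholder image
    command: ["/bin/sh", "-c", "env"]
    resources:
      requests:
        cpu: 250m         # illustrative values
        memory: 32Mi
      limits:
        cpu: 500m
        memory: 64Mi
    env:
    - name: CPU_LIMIT
      valueFrom:
        resourceFieldRef:
          resource: limits.cpu
    - name: MEMORY_REQUEST
      valueFrom:
        resourceFieldRef:
          resource: requests.memory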
•{"msg":"PASSED [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]","total":278,"completed":170,"skipped":2823,"failed":0} SSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 6 21:45:51.084: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name secret-test-ca586ab4-29eb-41e9-a0f0-7ae2e60d5de9 STEP: Creating a pod to test consume secrets Apr 6 21:45:51.203: INFO: Waiting up to 5m0s for pod "pod-secrets-c859d410-160f-40c8-b16d-ace427edc5a1" in namespace "secrets-7998" to be "success or failure" Apr 6 21:45:51.209: INFO: Pod "pod-secrets-c859d410-160f-40c8-b16d-ace427edc5a1": Phase="Pending", Reason="", readiness=false. Elapsed: 5.810449ms Apr 6 21:45:53.212: INFO: Pod "pod-secrets-c859d410-160f-40c8-b16d-ace427edc5a1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009040903s Apr 6 21:45:55.216: INFO: Pod "pod-secrets-c859d410-160f-40c8-b16d-ace427edc5a1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.013287792s STEP: Saw pod success Apr 6 21:45:55.216: INFO: Pod "pod-secrets-c859d410-160f-40c8-b16d-ace427edc5a1" satisfied condition "success or failure" Apr 6 21:45:55.220: INFO: Trying to get logs from node jerma-worker2 pod pod-secrets-c859d410-160f-40c8-b16d-ace427edc5a1 container secret-volume-test: STEP: delete the pod Apr 6 21:45:55.235: INFO: Waiting for pod pod-secrets-c859d410-160f-40c8-b16d-ace427edc5a1 to disappear Apr 6 21:45:55.240: INFO: Pod pod-secrets-c859d410-160f-40c8-b16d-ace427edc5a1 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 6 21:45:55.240: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-7998" for this suite. 
•{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":171,"skipped":2830,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 6 21:45:55.247: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should use the image defaults if command and args are blank [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 6 21:45:59.327: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-4944" for this suite. •{"msg":"PASSED [k8s.io] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance]","total":278,"completed":172,"skipped":2855,"failed":0} SSSSS ------------------------------ [sig-api-machinery] ResourceQuota should verify ResourceQuota with best effort scope. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 6 21:45:59.335: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should verify ResourceQuota with best effort scope. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a ResourceQuota with best effort scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a ResourceQuota with not best effort scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a best-effort pod STEP: Ensuring resource quota with best effort scope captures the pod usage STEP: Ensuring resource quota with not best effort ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage STEP: Creating a not best-effort pod STEP: Ensuring resource quota with not best effort scope captures the pod usage STEP: Ensuring resource quota with best effort scope ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 6 21:46:15.765: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-5690" for this suite. 
• [SLOW TEST:16.440 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should verify ResourceQuota with best effort scope. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with best effort scope. [Conformance]","total":278,"completed":173,"skipped":2860,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 6 21:46:15.775: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating the pod Apr 6 21:46:20.369: INFO: Successfully updated pod "labelsupdateecd46f98-9cf9-4d4d-ab06-fbcdf2219aa6" [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 6 21:46:22.389: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-9195" for this suite. 
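The downward API can expose metadata.labels as a volume file that the kubelet rewrites when the labels change, which is the update the test waits for after "Successfully updated pod". A sketch using the projected form (labels, image, and command are illustrative):

apiVersion: v1
kind: Pod
metadata:
  name: labelsupdate-pod
  labels:
    testlabel: "1"        # illustrative starting label
spec:
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: labels
            fieldRef:
              fieldPath: metadata.labels
  containers:
  - name: client-container
    image: busybox:1.29   # placeholder image
    command: ["/bin/sh", "-c", "while true; do cat /etc/podinfo/labels; sleep 2; done"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
# patching the pod's labels causes the kubelet to rewrite /etc/podinfo/labels
# on its next sync, so the loop eventually prints the updated set.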
• [SLOW TEST:6.623 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34 should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance]","total":278,"completed":174,"skipped":2877,"failed":0} S ------------------------------ [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 6 21:46:22.399: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to restart watching from the last resource version observed by the previous watch [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating a watch on configmaps STEP: creating a new configmap STEP: modifying the configmap once STEP: closing the watch once it receives two notifications Apr 6 21:46:22.496: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-8161 /api/v1/namespaces/watch-8161/configmaps/e2e-watch-test-watch-closed aa96a8e0-d928-4cc6-b3a3-d99158fa0810 5984035 0 2020-04-06 21:46:22 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},} Apr 6 21:46:22.496: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-8161 /api/v1/namespaces/watch-8161/configmaps/e2e-watch-test-watch-closed aa96a8e0-d928-4cc6-b3a3-d99158fa0810 5984036 0 2020-04-06 21:46:22 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} STEP: modifying the configmap a second time, while the watch is closed STEP: creating a new watch on configmaps from the last resource version observed by the first watch STEP: deleting the configmap STEP: Expecting to observe notifications for all changes to the configmap since the first watch closed Apr 6 21:46:22.508: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-8161 /api/v1/namespaces/watch-8161/configmaps/e2e-watch-test-watch-closed aa96a8e0-d928-4cc6-b3a3-d99158fa0810 5984037 0 2020-04-06 21:46:22 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} Apr 6 21:46:22.508: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-8161 /api/v1/namespaces/watch-8161/configmaps/e2e-watch-test-watch-closed aa96a8e0-d928-4cc6-b3a3-d99158fa0810 5984038 0 2020-04-06 21:46:22 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 6 21:46:22.508: 
INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-8161" for this suite. •{"msg":"PASSED [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance]","total":278,"completed":175,"skipped":2878,"failed":0} SSSSSSS ------------------------------ [k8s.io] Probing container should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 6 21:46:22.515: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating pod busybox-e21800b6-20b7-4c8e-8d9c-6138eb0ec13a in namespace container-probe-4841 Apr 6 21:46:26.599: INFO: Started pod busybox-e21800b6-20b7-4c8e-8d9c-6138eb0ec13a in namespace container-probe-4841 STEP: checking the pod's current state and verifying that restartCount is present Apr 6 21:46:26.602: INFO: Initial restart count of pod busybox-e21800b6-20b7-4c8e-8d9c-6138eb0ec13a is 0 Apr 6 21:47:18.793: INFO: Restart count of pod container-probe-4841/busybox-e21800b6-20b7-4c8e-8d9c-6138eb0ec13a is now 1 (52.191595349s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 6 21:47:18.848: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-4841" for this suite. 
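------------------------------
The Watchers test above closed its first watch after two notifications, mutated and then deleted the ConfigMap while no watch was open, and opened a second watch from the last resourceVersion it had observed; the missed MODIFIED and the DELETED events were still delivered, in order. A minimal sketch of resuming a watch this way, assuming a v1.17-era client-go whose Watch call takes no context argument; the selector and namespace are taken from the log, the function name is illustrative:

package watchresume

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// resumeWatch reopens a ConfigMap watch from the resourceVersion last
// seen by a previous, now-closed watch, so intervening events replay.
func resumeWatch(client kubernetes.Interface, last *corev1.ConfigMap) error {
	w, err := client.CoreV1().ConfigMaps("watch-8161").Watch(metav1.ListOptions{
		LabelSelector:   "watch-this-configmap=watch-closed-and-restarted",
		ResourceVersion: last.ResourceVersion, // e.g. "5984036" in the log above
	})
	if err != nil {
		return err
	}
	defer w.Stop()
	for ev := range w.ResultChan() {
		// Events that fired while no watch was open (MODIFIED, DELETED)
		// arrive first, starting just after the given resourceVersion.
		fmt.Printf("Got : %s %T\n", ev.Type, ev.Object)
	}
	return nil
}
------------------------------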
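------------------------------
The Probing container test above starts a busybox pod whose liveness probe execs "cat /tmp/health"; the container creates the file and removes it after a delay, at which point the probe fails and the kubelet restarts the container. That is the restartCount 0 -> 1 transition observed roughly 52s in. A minimal sketch of such a pod, assuming v1.17-era types (where Probe still embeds Handler, renamed ProbeHandler in later releases); the image tag and timings are illustrative:

package livenessprobe

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// livenessPod keeps /tmp/health around for 30s; after that the exec
// probe starts failing and the kubelet restarts the container.
func livenessPod() *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "busybox-liveness"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:    "busybox",
				Image:   "busybox:1.29",
				Command: []string{"/bin/sh", "-c", "touch /tmp/health; sleep 30; rm -f /tmp/health; sleep 600"},
				LivenessProbe: &corev1.Probe{
					Handler: corev1.Handler{
						Exec: &corev1.ExecAction{Command: []string{"cat", "/tmp/health"}},
					},
					InitialDelaySeconds: 15,
					FailureThreshold:    1,
				},
			}},
		},
	}
}
------------------------------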
• [SLOW TEST:56.369 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":278,"completed":176,"skipped":2885,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Deployment deployment should support proportional scaling [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 6 21:47:18.885: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:69 [It] deployment should support proportional scaling [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Apr 6 21:47:18.969: INFO: Creating deployment "webserver-deployment" Apr 6 21:47:18.974: INFO: Waiting for observed generation 1 Apr 6 21:47:21.119: INFO: Waiting for all required pods to come up Apr 6 21:47:21.124: INFO: Pod name httpd: Found 10 pods out of 10 STEP: ensuring each pod is running Apr 6 21:47:29.133: INFO: Waiting for deployment "webserver-deployment" to complete Apr 6 21:47:29.139: INFO: Updating deployment "webserver-deployment" with a non-existent image Apr 6 21:47:29.144: INFO: Updating deployment webserver-deployment Apr 6 21:47:29.144: INFO: Waiting for observed generation 2 Apr 6 21:47:31.284: INFO: Waiting for the first rollout's replicaset to have .status.availableReplicas = 8 Apr 6 21:47:31.286: INFO: Waiting for the first rollout's replicaset to have .spec.replicas = 8 Apr 6 21:47:31.289: INFO: Waiting for the first rollout's replicaset of deployment "webserver-deployment" to have desired number of replicas Apr 6 21:47:31.295: INFO: Verifying that the second rollout's replicaset has .status.availableReplicas = 0 Apr 6 21:47:31.295: INFO: Waiting for the second rollout's replicaset to have .spec.replicas = 5 Apr 6 21:47:31.298: INFO: Waiting for the second rollout's replicaset of deployment "webserver-deployment" to have desired number of replicas Apr 6 21:47:31.303: INFO: Verifying that deployment "webserver-deployment" has minimum required number of available replicas Apr 6 21:47:31.303: INFO: Scaling up the deployment "webserver-deployment" from 10 to 30 Apr 6 21:47:31.308: INFO: Updating deployment webserver-deployment Apr 6 21:47:31.308: INFO: Waiting for the replicasets of deployment "webserver-deployment" to have desired number of replicas Apr 6 21:47:31.434: INFO: Verifying that first rollout's replicaset has .spec.replicas = 20 Apr 6 21:47:31.476: INFO: Verifying that second rollout's replicaset has .spec.replicas = 13 [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:63 Apr 6 21:47:33.874: INFO: Deployment "webserver-deployment": 
&Deployment{ObjectMeta:{webserver-deployment deployment-3509 /apis/apps/v1/namespaces/deployment-3509/deployments/webserver-deployment 82927352-c1f8-4189-9da9-8351c583dfcc 5984534 3 2020-04-06 21:47:18 +0000 UTC map[name:httpd] map[deployment.kubernetes.io/revision:2] [] [] []},Spec:DeploymentSpec{Replicas:*30,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd] map[] [] [] []} {[] [] [{httpd webserver:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0049fd7a8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:2,MaxSurge:3,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:3,Replicas:33,UpdatedReplicas:13,AvailableReplicas:8,UnavailableReplicas:25,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:False,Reason:MinimumReplicasUnavailable,Message:Deployment does not have minimum availability.,LastUpdateTime:2020-04-06 21:47:31 +0000 UTC,LastTransitionTime:2020-04-06 21:47:31 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:ReplicaSetUpdated,Message:ReplicaSet "webserver-deployment-c7997dcc8" is progressing.,LastUpdateTime:2020-04-06 21:47:31 +0000 UTC,LastTransitionTime:2020-04-06 21:47:19 +0000 UTC,},},ReadyReplicas:8,CollisionCount:nil,},} Apr 6 21:47:33.932: INFO: New ReplicaSet "webserver-deployment-c7997dcc8" of Deployment "webserver-deployment": &ReplicaSet{ObjectMeta:{webserver-deployment-c7997dcc8 deployment-3509 /apis/apps/v1/namespaces/deployment-3509/replicasets/webserver-deployment-c7997dcc8 02274a0f-995f-4748-aad2-16368c361148 5984528 3 2020-04-06 21:47:29 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[deployment.kubernetes.io/desired-replicas:30 deployment.kubernetes.io/max-replicas:33 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment webserver-deployment 82927352-c1f8-4189-9da9-8351c583dfcc 0xc0049fdcb7 0xc0049fdcb8}] [] []},Spec:ReplicaSetSpec{Replicas:*13,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: c7997dcc8,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [] [] []} {[] [] [{httpd webserver:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0049fdd28 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] 
[]}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:13,FullyLabeledReplicas:13,ObservedGeneration:3,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Apr 6 21:47:33.932: INFO: All old ReplicaSets of Deployment "webserver-deployment": Apr 6 21:47:33.932: INFO: &ReplicaSet{ObjectMeta:{webserver-deployment-595b5b9587 deployment-3509 /apis/apps/v1/namespaces/deployment-3509/replicasets/webserver-deployment-595b5b9587 929a5b62-ae9d-4de6-b493-f90f765bb670 5984527 3 2020-04-06 21:47:18 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[deployment.kubernetes.io/desired-replicas:30 deployment.kubernetes.io/max-replicas:33 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment webserver-deployment 82927352-c1f8-4189-9da9-8351c583dfcc 0xc0049fdbe7 0xc0049fdbe8}] [] []},Spec:ReplicaSetSpec{Replicas:*20,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: 595b5b9587,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0049fdc48 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:20,FullyLabeledReplicas:20,ObservedGeneration:3,ReadyReplicas:8,AvailableReplicas:8,Conditions:[]ReplicaSetCondition{},},} Apr 6 21:47:34.203: INFO: Pod "webserver-deployment-595b5b9587-2hgk5" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-2hgk5 webserver-deployment-595b5b9587- deployment-3509 /api/v1/namespaces/deployment-3509/pods/webserver-deployment-595b5b9587-2hgk5 bff870b2-aa52-4a9b-9612-2fbea766ea8e 5984389 0 2020-04-06 21:47:19 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 929a5b62-ae9d-4de6-b493-f90f765bb670 0xc004aeb3b7 0xc004aeb3b8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-7m528,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-7m528,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-7m528,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-06 21:47:19 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-06 21:47:27 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-06 21:47:27 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-06 21:47:19 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.10,PodIP:10.244.1.236,StartTime:2020-04-06 21:47:19 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-04-06 21:47:27 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://c63a688223e6fc1d46fe46174e7ff04369bc724ef203d53fe2feb8f21b2db901,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.236,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 6 21:47:34.203: INFO: Pod "webserver-deployment-595b5b9587-2m82l" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-2m82l webserver-deployment-595b5b9587- deployment-3509 /api/v1/namespaces/deployment-3509/pods/webserver-deployment-595b5b9587-2m82l f82f8d10-db7b-4503-bb90-7cd066ceb78b 5984325 0 2020-04-06 21:47:19 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 929a5b62-ae9d-4de6-b493-f90f765bb670 0xc004aeb537 0xc004aeb538}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-7m528,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-7m528,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-7m528,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Ef
fect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-06 21:47:19 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-06 21:47:22 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-06 21:47:22 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-06 21:47:19 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.10,PodIP:10.244.1.232,StartTime:2020-04-06 21:47:19 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-04-06 21:47:22 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://d433dd7e0646493c852000d88bf812a5f0032009ea862456356b480e7ed9f9aa,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.232,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 6 21:47:34.203: INFO: Pod "webserver-deployment-595b5b9587-4wqs2" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-4wqs2 webserver-deployment-595b5b9587- deployment-3509 /api/v1/namespaces/deployment-3509/pods/webserver-deployment-595b5b9587-4wqs2 23aec4c3-62d4-48cd-9d1b-dddfac9cec93 5984369 0 2020-04-06 21:47:19 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 929a5b62-ae9d-4de6-b493-f90f765bb670 0xc004aeb6b7 0xc004aeb6b8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-7m528,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-7m528,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-7m528,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-06 21:47:19 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-06 21:47:26 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-06 21:47:26 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-06 21:47:19 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.8,PodIP:10.244.2.59,StartTime:2020-04-06 21:47:19 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-04-06 21:47:25 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://90af038682c4ed5e82eced4c7f5df335340506476dfb9ab2254041bc706827b0,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.59,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 6 21:47:34.204: INFO: Pod "webserver-deployment-595b5b9587-5lvrz" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-5lvrz webserver-deployment-595b5b9587- deployment-3509 /api/v1/namespaces/deployment-3509/pods/webserver-deployment-595b5b9587-5lvrz c9e1b535-b860-4447-8e0d-3591e54d6414 5984593 0 2020-04-06 21:47:31 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 929a5b62-ae9d-4de6-b493-f90f765bb670 0xc004aeb837 0xc004aeb838}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-7m528,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-7m528,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-7m528,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:
,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-06 21:47:31 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-06 21:47:31 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-06 21:47:31 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-06 21:47:31 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.10,PodIP:,StartTime:2020-04-06 21:47:31 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 6 21:47:34.204: INFO: Pod "webserver-deployment-595b5b9587-6t8pv" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-6t8pv webserver-deployment-595b5b9587- deployment-3509 /api/v1/namespaces/deployment-3509/pods/webserver-deployment-595b5b9587-6t8pv 90a11e57-7124-4142-b1d7-decc93bd2f26 5984551 0 2020-04-06 21:47:31 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 929a5b62-ae9d-4de6-b493-f90f765bb670 0xc004aeb997 0xc004aeb998}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-7m528,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-7m528,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-7m528,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-06 21:47:31 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-06 21:47:31 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-06 21:47:31 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-06 21:47:31 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.10,PodIP:,StartTime:2020-04-06 21:47:31 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 6 21:47:34.204: INFO: Pod "webserver-deployment-595b5b9587-8922v" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-8922v webserver-deployment-595b5b9587- deployment-3509 /api/v1/namespaces/deployment-3509/pods/webserver-deployment-595b5b9587-8922v 86a68423-11f6-4c5a-a7e5-5acd03673f98 5984393 0 2020-04-06 21:47:19 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 929a5b62-ae9d-4de6-b493-f90f765bb670 0xc004aebaf7 0xc004aebaf8}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-7m528,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-7m528,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-7m528,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,Enab
leServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-06 21:47:19 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-06 21:47:27 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-06 21:47:27 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-06 21:47:19 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.10,PodIP:10.244.1.233,StartTime:2020-04-06 21:47:19 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-04-06 21:47:26 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://c50168f4d5c74f84c24f553539ac677d9b34121d0ab622af57e8a66b6865c94e,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.233,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 6 21:47:34.205: INFO: Pod "webserver-deployment-595b5b9587-cxx7w" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-cxx7w webserver-deployment-595b5b9587- deployment-3509 /api/v1/namespaces/deployment-3509/pods/webserver-deployment-595b5b9587-cxx7w 355e4711-44d2-429d-a83c-6f6f8ed2c45f 5984401 0 2020-04-06 21:47:19 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 929a5b62-ae9d-4de6-b493-f90f765bb670 0xc004aebc77 0xc004aebc78}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-7m528,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-7m528,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-7m528,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-06 21:47:19 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-06 21:47:27 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-06 21:47:27 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-06 21:47:19 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.8,PodIP:10.244.2.62,StartTime:2020-04-06 21:47:19 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-04-06 21:47:27 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://d1269bc5def90d6742725f9da4e8e8cac8d5bc44f5a71f92c3d63573f8b2aa7b,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.62,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 6 21:47:34.205: INFO: Pod "webserver-deployment-595b5b9587-fk2hg" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-fk2hg webserver-deployment-595b5b9587- deployment-3509 /api/v1/namespaces/deployment-3509/pods/webserver-deployment-595b5b9587-fk2hg bebf56f0-8c36-4340-a1ad-4f1eb5943dc5 5984530 0 2020-04-06 21:47:31 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 929a5b62-ae9d-4de6-b493-f90f765bb670 0xc004aebdf7 0xc004aebdf8}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-7m528,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-7m528,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-7m528,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:
,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-06 21:47:31 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-06 21:47:31 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-06 21:47:31 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-06 21:47:31 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.10,PodIP:,StartTime:2020-04-06 21:47:31 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 6 21:47:34.205: INFO: Pod "webserver-deployment-595b5b9587-gckcc" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-gckcc webserver-deployment-595b5b9587- deployment-3509 /api/v1/namespaces/deployment-3509/pods/webserver-deployment-595b5b9587-gckcc 713a7993-4a0d-4492-a452-a73926fc0dd3 5984525 0 2020-04-06 21:47:31 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 929a5b62-ae9d-4de6-b493-f90f765bb670 0xc004aebf57 0xc004aebf58}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-7m528,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-7m528,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-7m528,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-06 21:47:31 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-06 21:47:31 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-06 21:47:31 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-06 21:47:31 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.8,PodIP:,StartTime:2020-04-06 21:47:31 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Apr 6 21:47:34.206: INFO: Pod "webserver-deployment-595b5b9587-gfk5m" is available: Phase=Running Node=jerma-worker HostIP=172.17.0.10 PodIP=10.244.1.235 Image=docker.io/library/httpd:2.4.38-alpine Ready=True (since 2020-04-06 21:47:27 +0000 UTC) ContainerState=Running (started 2020-04-06 21:47:27 +0000 UTC) ImageID=docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060 ContainerID=containerd://cc0a884cd8d7dda9f59612b76696f71a06ed884cf81f694f7d808e889ead5232 RestartCount=0 QOSClass=BestEffort UID=787bb5bf-a633-4adb-a2b7-301da97fd9cf ResourceVersion=5984386 Created=2020-04-06 21:47:19 +0000 UTC OwnerRef=ReplicaSet/webserver-deployment-595b5b9587
Apr 6 21:47:34.206: INFO: Pod "webserver-deployment-595b5b9587-knwq8" is not available: Phase=Pending Node=jerma-worker2 HostIP=172.17.0.8 PodIP=<none> Image=docker.io/library/httpd:2.4.38-alpine Ready=False (ContainersNotReady: containers with unready status: [httpd]) ContainerState=Waiting (ContainerCreating) RestartCount=0 QOSClass=BestEffort UID=98fdfab0-c982-4e8c-b193-0168e377858d ResourceVersion=5984576 Created=2020-04-06 21:47:31 +0000 UTC OwnerRef=ReplicaSet/webserver-deployment-595b5b9587
Apr 6 21:47:34.206: INFO: Pod "webserver-deployment-595b5b9587-m2xbx" is not available: Phase=Pending Node=jerma-worker HostIP=172.17.0.10 PodIP=<none> Image=docker.io/library/httpd:2.4.38-alpine Ready=False (ContainersNotReady: containers with unready status: [httpd]) ContainerState=Waiting (ContainerCreating) RestartCount=0 QOSClass=BestEffort UID=0228adf3-8640-4304-82fa-8a5fe1152465 ResourceVersion=5984578 Created=2020-04-06 21:47:31 +0000 UTC OwnerRef=ReplicaSet/webserver-deployment-595b5b9587
Apr 6 21:47:34.206: INFO: Pod "webserver-deployment-595b5b9587-nlvg6" is not available: Phase=Pending Node=jerma-worker2 HostIP=172.17.0.8 PodIP=<none> Image=docker.io/library/httpd:2.4.38-alpine Ready=False (ContainersNotReady: containers with unready status: [httpd]) ContainerState=Waiting (ContainerCreating) RestartCount=0 QOSClass=BestEffort UID=627a239d-70ca-4e93-8db7-a1de777d8ac3 ResourceVersion=5984537 Created=2020-04-06 21:47:31 +0000 UTC OwnerRef=ReplicaSet/webserver-deployment-595b5b9587
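The available / not available verdicts in the dumps above reduce to the pod's Ready condition: the Running pods report Ready=True, while the Pending ones are still in ContainerCreating and report Ready=False (ContainersNotReady). A minimal sketch of that check, assuming the k8s.io/api core/v1 types; the e2e framework's own helper additionally honors a deployment's minReadySeconds, which this test leaves at zero:

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

// isPodAvailable mirrors the verdicts logged above: with no
// minReadySeconds in play, a pod counts as available once it is
// Running and its Ready condition is True. The Pending pods above
// fail both tests while they sit in ContainerCreating.
func isPodAvailable(pod *corev1.Pod) bool {
	if pod.Status.Phase != corev1.PodRunning {
		return false
	}
	for _, cond := range pod.Status.Conditions {
		if cond.Type == corev1.PodReady {
			return cond.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	pending := &corev1.Pod{Status: corev1.PodStatus{
		Phase: corev1.PodPending,
		Conditions: []corev1.PodCondition{
			{Type: corev1.PodReady, Status: corev1.ConditionFalse, Reason: "ContainersNotReady"},
		},
	}}
	fmt.Println(isPodAvailable(pending)) // false, like the ContainerCreating pods above
}
```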
Apr 6 21:47:34.206: INFO: Pod "webserver-deployment-595b5b9587-nv642" is not available: Phase=Pending Node=jerma-worker HostIP=172.17.0.10 PodIP=<none> Image=docker.io/library/httpd:2.4.38-alpine Ready=False (ContainersNotReady: containers with unready status: [httpd]) ContainerState=Waiting (ContainerCreating) RestartCount=0 QOSClass=BestEffort UID=b954f5ad-62f8-4f55-a4a4-82ab8edb7f90 ResourceVersion=5984558 Created=2020-04-06 21:47:31 +0000 UTC OwnerRef=ReplicaSet/webserver-deployment-595b5b9587
Apr 6 21:47:34.207: INFO: Pod "webserver-deployment-595b5b9587-p8dsb" is not available: Phase=Pending Node=jerma-worker HostIP=172.17.0.10 PodIP=<none> Image=docker.io/library/httpd:2.4.38-alpine Ready=False (ContainersNotReady: containers with unready status: [httpd]) ContainerState=Waiting (ContainerCreating) RestartCount=0 QOSClass=BestEffort UID=f6d99518-e833-47fc-9245-6577eae91cf7 ResourceVersion=5984541 Created=2020-04-06 21:47:31 +0000 UTC OwnerRef=ReplicaSet/webserver-deployment-595b5b9587
Apr 6 21:47:34.207: INFO: Pod "webserver-deployment-595b5b9587-s4dkc" is available: Phase=Running Node=jerma-worker2 HostIP=172.17.0.8 PodIP=10.244.2.60 Image=docker.io/library/httpd:2.4.38-alpine Ready=True (since 2020-04-06 21:47:26 +0000 UTC) ContainerState=Running (started 2020-04-06 21:47:25 +0000 UTC) ImageID=docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060 ContainerID=containerd://27a97ad383af2d96719107c1e44141b660b3722f10bb54978dfc71f2a1e2ecde RestartCount=0 QOSClass=BestEffort UID=d31499fc-21d9-4d98-b9f7-a829a4271448 ResourceVersion=5984372 Created=2020-04-06 21:47:19 +0000 UTC OwnerRef=ReplicaSet/webserver-deployment-595b5b9587
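Every pod above reports QOSClass=BestEffort, which follows directly from the spec: the httpd container declares no resource requests or limits. A simplified sketch of the classification rule, assuming the k8s.io/api core/v1 types; the kubelet's real classifier is stricter about Guaranteed (it also requires requests to equal limits per resource):

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

// qosClass is a simplified sketch of why every dump above says
// QOSClass:BestEffort: no container sets requests or limits.
func qosClass(pod *corev1.Pod) corev1.PodQOSClass {
	anySet, allLimited := false, true
	for _, c := range pod.Spec.Containers {
		if len(c.Resources.Requests) > 0 || len(c.Resources.Limits) > 0 {
			anySet = true
		}
		// Guaranteed (loosely) needs both cpu and memory limits set.
		cpu := c.Resources.Limits[corev1.ResourceCPU]
		mem := c.Resources.Limits[corev1.ResourceMemory]
		if cpu.IsZero() || mem.IsZero() {
			allLimited = false
		}
	}
	switch {
	case !anySet:
		return corev1.PodQOSBestEffort
	case allLimited:
		return corev1.PodQOSGuaranteed
	default:
		return corev1.PodQOSBurstable
	}
}

func main() {
	pod := &corev1.Pod{Spec: corev1.PodSpec{
		Containers: []corev1.Container{{Name: "httpd", Image: "docker.io/library/httpd:2.4.38-alpine"}},
	}}
	fmt.Println(qosClass(pod)) // BestEffort
}
```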
Apr 6 21:47:34.207: INFO: Pod "webserver-deployment-595b5b9587-sjn22" is not available: Phase=Pending Node=jerma-worker2 HostIP=172.17.0.8 PodIP=<none> Image=docker.io/library/httpd:2.4.38-alpine Ready=False (ContainersNotReady: containers with unready status: [httpd]) ContainerState=Waiting (ContainerCreating) RestartCount=0 QOSClass=BestEffort UID=8741a346-b14e-4fea-b4b7-59b9c3dc1018 ResourceVersion=5984553 Created=2020-04-06 21:47:31 +0000 UTC OwnerRef=ReplicaSet/webserver-deployment-595b5b9587
Apr 6 21:47:34.207: INFO: Pod "webserver-deployment-595b5b9587-t49w4" is not available: Phase=Pending Node=jerma-worker2 HostIP=172.17.0.8 PodIP=<none> Image=docker.io/library/httpd:2.4.38-alpine Ready=False (ContainersNotReady: containers with unready status: [httpd]) ContainerState=Waiting (ContainerCreating) RestartCount=0 QOSClass=BestEffort UID=002ec541-caf6-4b4b-af12-a0f203c29b34 ResourceVersion=5984609 Created=2020-04-06 21:47:31 +0000 UTC OwnerRef=ReplicaSet/webserver-deployment-595b5b9587
Apr 6 21:47:34.208: INFO: Pod "webserver-deployment-595b5b9587-thfcn" is not available: Phase=Pending Node=jerma-worker2 HostIP=172.17.0.8 PodIP=<none> Image=docker.io/library/httpd:2.4.38-alpine Ready=False (ContainersNotReady: containers with unready status: [httpd]) ContainerState=Waiting (ContainerCreating) RestartCount=0 QOSClass=BestEffort UID=ed1997b7-8c4a-47ed-828d-105fe082c206 ResourceVersion=5984568 Created=2020-04-06 21:47:31 +0000 UTC OwnerRef=ReplicaSet/webserver-deployment-595b5b9587
Apr 6 21:47:34.208: INFO: Pod "webserver-deployment-595b5b9587-xhq9m" is available: Phase=Running Node=jerma-worker2 HostIP=172.17.0.8 PodIP=10.244.2.58 Image=docker.io/library/httpd:2.4.38-alpine Ready=True (since 2020-04-06 21:47:24 +0000 UTC) ContainerState=Running (started 2020-04-06 21:47:23 +0000 UTC) ImageID=docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060 ContainerID=containerd://d6a143b5baa03dee6cf36d95a97fef79254565299320066552da4b3ea93fb30f RestartCount=0 QOSClass=BestEffort UID=0d43d885-5c80-4e56-9884-6c053cb3da98 ResourceVersion=5984348 Created=2020-04-06 21:47:19 +0000 UTC OwnerRef=ReplicaSet/webserver-deployment-595b5b9587
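The c7997dcc8 pods that follow belong to a second ReplicaSet, created by updating the deployment's pod template to the deliberately unpullable image webserver:404; that is why they stay Pending while only the older 595b5b9587 (httpd) pods report available. A sketch of the kind of Deployment and rollover under test, assuming the k8s.io/api types; the replica count shown is an assumption for illustration, not read from the log:

```go
package main

import (
	"fmt"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func int32Ptr(i int32) *int32 { return &i }

// newWebserverDeployment sketches the object this log is exercising:
// a plain httpd Deployment whose pods carry the name=httpd label seen
// in every dump above.
func newWebserverDeployment() *appsv1.Deployment {
	labels := map[string]string{"name": "httpd"}
	return &appsv1.Deployment{
		ObjectMeta: metav1.ObjectMeta{Name: "webserver-deployment", Namespace: "deployment-3509"},
		Spec: appsv1.DeploymentSpec{
			Replicas: int32Ptr(10), // assumption, not taken from the log
			Selector: &metav1.LabelSelector{MatchLabels: labels},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{
					Containers: []corev1.Container{{
						Name:  "httpd",
						Image: "docker.io/library/httpd:2.4.38-alpine",
					}},
				},
			},
		},
	}
}

func main() {
	d := newWebserverDeployment()
	// The rollover step: changing the template image spawns a new
	// ReplicaSet; because webserver:404 cannot be pulled, its pods
	// stall in ContainerCreating and never report Ready=True.
	d.Spec.Template.Spec.Containers[0].Image = "webserver:404"
	fmt.Println(d.Name, d.Spec.Template.Spec.Containers[0].Image)
}
```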
Apr 6 21:47:34.208: INFO: Pod "webserver-deployment-c7997dcc8-5qbjq" is not available: Phase=Pending Node=jerma-worker2 HostIP=172.17.0.8 PodIP=<none> Image=webserver:404 Ready=False (ContainersNotReady: containers with unready status: [httpd]) ContainerState=Waiting (ContainerCreating) RestartCount=0 QOSClass=BestEffort UID=caae5324-15f9-4207-ab6c-7870606382be ResourceVersion=5984557 Created=2020-04-06 21:47:31 +0000 UTC OwnerRef=ReplicaSet/webserver-deployment-c7997dcc8
Apr 6 21:47:34.208: INFO: Pod "webserver-deployment-c7997dcc8-62t9t" is not available: Phase=Pending Node=jerma-worker2 HostIP=172.17.0.8 PodIP=<none> Image=webserver:404 Ready=False (ContainersNotReady: containers with unready status: [httpd]) ContainerState=Waiting (ContainerCreating) RestartCount=0 QOSClass=BestEffort UID=de0a2a47-217e-4f16-abbc-09c7ba909f1d ResourceVersion=5984602 Created=2020-04-06 21:47:31 +0000 UTC OwnerRef=ReplicaSet/webserver-deployment-c7997dcc8
Apr 6 21:47:34.208: INFO: Pod "webserver-deployment-c7997dcc8-7xzkd" is not available: Phase=Pending Node=jerma-worker2 HostIP=172.17.0.8 PodIP=<none> Image=webserver:404 Ready=False (ContainersNotReady: containers with unready status: [httpd]) ContainerState=Waiting (ContainerCreating) RestartCount=0 QOSClass=BestEffort UID=7f71a057-d3be-45c8-9883-387cb087b87c ResourceVersion=5984457 Created=2020-04-06 21:47:29 +0000 UTC OwnerRef=ReplicaSet/webserver-deployment-c7997dcc8
Apr 6 21:47:34.209: INFO: Pod "webserver-deployment-c7997dcc8-8b4nr" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-8b4nr webserver-deployment-c7997dcc8- deployment-3509 /api/v1/namespaces/deployment-3509/pods/webserver-deployment-c7997dcc8-8b4nr fb1ddef3-6619-44e3-8b7b-c684b590f1cc 5984612 0 2020-04-06 21:47:29 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 02274a0f-995f-4748-aad2-16368c361148 0xc004c054c7 0xc004c054c8}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-7m528,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-7m528,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-7m528,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead
:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-06 21:47:29 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-06 21:47:29 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-06 21:47:29 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-06 21:47:29 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.10,PodIP:10.244.1.237,StartTime:2020-04-06 21:47:29 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ErrImagePull,Message:rpc error: code = Unknown desc = failed to pull and unpack image "docker.io/library/webserver:404": failed to resolve reference "docker.io/library/webserver:404": pull access denied, repository does not exist or may require authorization: server message: insufficient_scope: authorization failed,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.237,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 6 21:47:34.209: INFO: Pod "webserver-deployment-c7997dcc8-d9b8d" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-d9b8d webserver-deployment-c7997dcc8- deployment-3509 /api/v1/namespaces/deployment-3509/pods/webserver-deployment-c7997dcc8-d9b8d a76a69f5-9006-4dec-90ac-d7f8e64fbb0f 5984575 0 2020-04-06 21:47:31 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 02274a0f-995f-4748-aad2-16368c361148 0xc004c05677 0xc004c05678}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-7m528,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-7m528,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-7m528,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-06 21:47:31 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-06 21:47:31 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-06 21:47:31 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-06 21:47:31 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.10,PodIP:,StartTime:2020-04-06 21:47:31 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 6 21:47:34.209: INFO: Pod "webserver-deployment-c7997dcc8-gfjk9" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-gfjk9 webserver-deployment-c7997dcc8- deployment-3509 /api/v1/namespaces/deployment-3509/pods/webserver-deployment-c7997dcc8-gfjk9 cd9390e6-a81c-451f-bf26-c95c8842fb65 5984606 0 2020-04-06 21:47:29 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 02274a0f-995f-4748-aad2-16368c361148 0xc004c057f7 0xc004c057f8}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-7m528,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-7m528,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-7m528,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead
:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-06 21:47:29 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-06 21:47:29 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-06 21:47:29 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-06 21:47:29 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.10,PodIP:10.244.1.238,StartTime:2020-04-06 21:47:29 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ErrImagePull,Message:rpc error: code = Unknown desc = failed to pull and unpack image "docker.io/library/webserver:404": failed to resolve reference "docker.io/library/webserver:404": pull access denied, repository does not exist or may require authorization: server message: insufficient_scope: authorization failed,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.238,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 6 21:47:34.209: INFO: Pod "webserver-deployment-c7997dcc8-hc5wh" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-hc5wh webserver-deployment-c7997dcc8- deployment-3509 /api/v1/namespaces/deployment-3509/pods/webserver-deployment-c7997dcc8-hc5wh 237d4482-08d5-4c68-abda-6d76541ea1dc 5984603 0 2020-04-06 21:47:31 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 02274a0f-995f-4748-aad2-16368c361148 0xc004c059a7 0xc004c059a8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-7m528,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-7m528,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-7m528,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-06 21:47:31 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-06 21:47:31 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-06 21:47:31 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-06 21:47:31 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.10,PodIP:,StartTime:2020-04-06 21:47:31 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 6 21:47:34.209: INFO: Pod "webserver-deployment-c7997dcc8-jvnv2" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-jvnv2 webserver-deployment-c7997dcc8- deployment-3509 /api/v1/namespaces/deployment-3509/pods/webserver-deployment-c7997dcc8-jvnv2 f01501a4-aaf4-41dc-9102-ae3030e007ca 5984458 0 2020-04-06 21:47:29 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 02274a0f-995f-4748-aad2-16368c361148 0xc004c05b27 0xc004c05b28}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-7m528,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-7m528,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-7m528,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead
:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-06 21:47:29 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-06 21:47:29 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-06 21:47:29 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-06 21:47:29 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.10,PodIP:,StartTime:2020-04-06 21:47:29 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 6 21:47:34.209: INFO: Pod "webserver-deployment-c7997dcc8-ln5bs" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-ln5bs webserver-deployment-c7997dcc8- deployment-3509 /api/v1/namespaces/deployment-3509/pods/webserver-deployment-c7997dcc8-ln5bs 90965582-eaee-4697-98da-f639b56736b1 5984567 0 2020-04-06 21:47:31 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 02274a0f-995f-4748-aad2-16368c361148 0xc004c05ca7 0xc004c05ca8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-7m528,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-7m528,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-7m528,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-06 21:47:31 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-06 21:47:31 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-06 21:47:31 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-06 21:47:31 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.10,PodIP:,StartTime:2020-04-06 21:47:31 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 6 21:47:34.210: INFO: Pod "webserver-deployment-c7997dcc8-mn8p8" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-mn8p8 webserver-deployment-c7997dcc8- deployment-3509 /api/v1/namespaces/deployment-3509/pods/webserver-deployment-c7997dcc8-mn8p8 985df558-9a16-467c-8db8-484d4a756baa 5984591 0 2020-04-06 21:47:31 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 02274a0f-995f-4748-aad2-16368c361148 0xc004c05e27 0xc004c05e28}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-7m528,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-7m528,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-7m528,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhea
d:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-06 21:47:31 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-06 21:47:31 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-06 21:47:31 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-06 21:47:31 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.8,PodIP:,StartTime:2020-04-06 21:47:31 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 6 21:47:34.210: INFO: Pod "webserver-deployment-c7997dcc8-ntxsl" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-ntxsl webserver-deployment-c7997dcc8- deployment-3509 /api/v1/namespaces/deployment-3509/pods/webserver-deployment-c7997dcc8-ntxsl ceb72f19-ba9e-487e-8a12-92f716934b98 5984605 0 2020-04-06 21:47:29 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 02274a0f-995f-4748-aad2-16368c361148 0xc004c05fa7 0xc004c05fa8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-7m528,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-7m528,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-7m528,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-06 21:47:29 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-06 21:47:29 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-06 21:47:29 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-06 21:47:29 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.8,PodIP:10.244.2.63,StartTime:2020-04-06 21:47:29 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ErrImagePull,Message:rpc error: code = Unknown desc = failed to pull and unpack image "docker.io/library/webserver:404": failed to resolve reference "docker.io/library/webserver:404": pull access denied, repository does not exist or may require authorization: server message: insufficient_scope: authorization failed,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.63,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 6 21:47:34.210: INFO: Pod "webserver-deployment-c7997dcc8-s58tm" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-s58tm webserver-deployment-c7997dcc8- deployment-3509 /api/v1/namespaces/deployment-3509/pods/webserver-deployment-c7997dcc8-s58tm 1b3e64a3-2e05-4415-a477-f0264b4618b6 5984526 0 2020-04-06 21:47:31 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 02274a0f-995f-4748-aad2-16368c361148 0xc0051fc167 0xc0051fc168}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-7m528,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-7m528,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-7m528,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Tolerat
ion{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-06 21:47:31 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 6 21:47:34.210: INFO: Pod "webserver-deployment-c7997dcc8-xm7wb" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-xm7wb webserver-deployment-c7997dcc8- deployment-3509 /api/v1/namespaces/deployment-3509/pods/webserver-deployment-c7997dcc8-xm7wb c034cfe7-e4a6-4db1-bafd-c6aa06c7be81 5984543 0 2020-04-06 21:47:31 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 02274a0f-995f-4748-aad2-16368c361148 0xc0051fc297 0xc0051fc298}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-7m528,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-7m528,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-7m528,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:
Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-06 21:47:31 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-06 21:47:31 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-06 21:47:31 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-06 21:47:31 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.8,PodIP:,StartTime:2020-04-06 21:47:31 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 6 21:47:34.210: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-3509" for this suite. 
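The Pod dumps above all reduce to one diagnosis: every replica created from the webserver:404 template is Pending with its httpd container in ContainerStateWaiting, reason ContainerCreating or ErrImagePull, because the image docker.io/library/webserver:404 deliberately does not exist; that is what keeps the new ReplicaSet unavailable while proportional scaling is exercised. A minimal client-go sketch that surfaces just the waiting reason instead of whole Pod objects (assumptions: client-go v0.18+ for the context-taking List signature; the kubeconfig path, namespace, and label selector are copied from this log):

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Build a client from the same kubeconfig this e2e run uses.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// Namespace and selector match the deployment under test in this log.
	pods, err := cs.CoreV1().Pods("deployment-3509").List(context.TODO(),
		metav1.ListOptions{LabelSelector: "name=httpd"})
	if err != nil {
		panic(err)
	}
	for _, p := range pods.Items {
		for _, st := range p.Status.ContainerStatuses {
			// A non-nil Waiting state carries the reason the container
			// has not started, e.g. "ErrImagePull (... pull access denied ...)".
			if w := st.State.Waiting; w != nil {
				fmt.Printf("%s: %s (%s)\n", p.Name, w.Reason, w.Message)
			}
		}
	}
}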
• [SLOW TEST:15.543 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should support proportional scaling [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Deployment deployment should support proportional scaling [Conformance]","total":278,"completed":177,"skipped":2902,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 6 21:47:34.429: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow substituting values in a container's args [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test substitution in container's args Apr 6 21:47:35.244: INFO: Waiting up to 5m0s for pod "var-expansion-b39d1afd-d216-40d5-ad75-cbb095dec25a" in namespace "var-expansion-1883" to be "success or failure" Apr 6 21:47:35.273: INFO: Pod "var-expansion-b39d1afd-d216-40d5-ad75-cbb095dec25a": Phase="Pending", Reason="", readiness=false. Elapsed: 28.81376ms Apr 6 21:47:37.382: INFO: Pod "var-expansion-b39d1afd-d216-40d5-ad75-cbb095dec25a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.138086291s Apr 6 21:47:39.442: INFO: Pod "var-expansion-b39d1afd-d216-40d5-ad75-cbb095dec25a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.197854153s Apr 6 21:47:41.446: INFO: Pod "var-expansion-b39d1afd-d216-40d5-ad75-cbb095dec25a": Phase="Pending", Reason="", readiness=false. Elapsed: 6.202137565s Apr 6 21:47:43.466: INFO: Pod "var-expansion-b39d1afd-d216-40d5-ad75-cbb095dec25a": Phase="Pending", Reason="", readiness=false. Elapsed: 8.222341964s Apr 6 21:47:45.629: INFO: Pod "var-expansion-b39d1afd-d216-40d5-ad75-cbb095dec25a": Phase="Pending", Reason="", readiness=false. Elapsed: 10.384960327s Apr 6 21:47:47.739: INFO: Pod "var-expansion-b39d1afd-d216-40d5-ad75-cbb095dec25a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.49482116s STEP: Saw pod success Apr 6 21:47:47.739: INFO: Pod "var-expansion-b39d1afd-d216-40d5-ad75-cbb095dec25a" satisfied condition "success or failure" Apr 6 21:47:47.940: INFO: Trying to get logs from node jerma-worker pod var-expansion-b39d1afd-d216-40d5-ad75-cbb095dec25a container dapi-container: STEP: delete the pod Apr 6 21:47:48.539: INFO: Waiting for pod var-expansion-b39d1afd-d216-40d5-ad75-cbb095dec25a to disappear Apr 6 21:47:48.685: INFO: Pod var-expansion-b39d1afd-d216-40d5-ad75-cbb095dec25a no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 6 21:47:48.686: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-1883" for this suite. 
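In outline, the pod this test creates has the following shape (a sketch under assumptions: the pod name, image, and variable name are illustrative, while the container name dapi-container appears in the log above): an env var declared on the container is referenced from args via the $(VAR) syntax, and the kubelet substitutes it before the container starts, which is exactly what the args-substitution step verifies.

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "var-expansion-demo"}, // illustrative name
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "dapi-container", // container name seen in the log
				Image:   "busybox",        // illustrative; the suite uses its own test image
				Command: []string{"sh", "-c"},
				// The kubelet rewrites $(TEST_VAR) in args to its value before
				// the container starts, so the shell simply echoes "test-value".
				Args: []string{"echo $(TEST_VAR)"},
				Env:  []corev1.EnvVar{{Name: "TEST_VAR", Value: "test-value"}},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out)) // prints the manifest; no cluster required
}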
• [SLOW TEST:14.313 seconds] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should allow substituting values in a container's args [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance]","total":278,"completed":178,"skipped":2949,"failed":0} SSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 6 21:47:48.742: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Apr 6 21:47:49.156: INFO: Waiting up to 5m0s for pod "downwardapi-volume-4b08f571-6446-430d-b51d-0169eb3d59c9" in namespace "projected-9699" to be "success or failure" Apr 6 21:47:49.288: INFO: Pod "downwardapi-volume-4b08f571-6446-430d-b51d-0169eb3d59c9": Phase="Pending", Reason="", readiness=false. Elapsed: 132.033883ms Apr 6 21:47:51.291: INFO: Pod "downwardapi-volume-4b08f571-6446-430d-b51d-0169eb3d59c9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.134784794s Apr 6 21:47:53.294: INFO: Pod "downwardapi-volume-4b08f571-6446-430d-b51d-0169eb3d59c9": Phase="Running", Reason="", readiness=true. Elapsed: 4.138368797s Apr 6 21:47:55.298: INFO: Pod "downwardapi-volume-4b08f571-6446-430d-b51d-0169eb3d59c9": Phase="Running", Reason="", readiness=true. Elapsed: 6.142148142s Apr 6 21:47:57.301: INFO: Pod "downwardapi-volume-4b08f571-6446-430d-b51d-0169eb3d59c9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.145417656s STEP: Saw pod success Apr 6 21:47:57.301: INFO: Pod "downwardapi-volume-4b08f571-6446-430d-b51d-0169eb3d59c9" satisfied condition "success or failure" Apr 6 21:47:57.304: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-4b08f571-6446-430d-b51d-0169eb3d59c9 container client-container: STEP: delete the pod Apr 6 21:47:57.322: INFO: Waiting for pod downwardapi-volume-4b08f571-6446-430d-b51d-0169eb3d59c9 to disappear Apr 6 21:47:57.327: INFO: Pod downwardapi-volume-4b08f571-6446-430d-b51d-0169eb3d59c9 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 6 21:47:57.327: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-9699" for this suite. 
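The volume shape this test exercises, in outline (a sketch under assumptions: the 0644 mode, file path, and image are illustrative, while the container name client-container appears in the log above): a projected volume wraps a downwardAPI source, and DefaultMode sets the permission bits of the projected files, which the test reads back from the mount, hence the [LinuxOnly] tag. Note 0644 octal is 420 decimal, the same *420 that appears as DefaultMode on the secret volumes in the dumps above.

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	mode := int32(0644) // illustrative; applied to projected files without an explicit Mode
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "downwardapi-volume-demo"}, // illustrative name
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "client-container", // container name seen in the log
				Image:   "busybox",
				Command: []string{"sh", "-c", "ls -l /etc/podinfo"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "podinfo",
					MountPath: "/etc/podinfo",
				}},
			}},
			Volumes: []corev1.Volume{{
				Name: "podinfo",
				VolumeSource: corev1.VolumeSource{
					Projected: &corev1.ProjectedVolumeSource{
						DefaultMode: &mode,
						Sources: []corev1.VolumeProjection{{
							DownwardAPI: &corev1.DownwardAPIProjection{
								Items: []corev1.DownwardAPIVolumeFile{{
									// Expose the pod's own name as a file in the volume.
									Path:     "podname",
									FieldRef: &corev1.ObjectFieldSelector{APIVersion: "v1", FieldPath: "metadata.name"},
								}},
							},
						}},
					},
				},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out)) // prints the manifest; no cluster required
}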
• [SLOW TEST:8.609 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34 should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":179,"skipped":2956,"failed":0} SSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 6 21:47:57.351: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Apr 6 21:47:58.190: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Apr 6 21:48:00.201: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721806478, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721806478, loc:(*time.Location)(0x78ee080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721806478, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721806478, loc:(*time.Location)(0x78ee080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Apr 6 21:48:03.230: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] patching/updating a validating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a validating webhook configuration STEP: Creating a configMap that does not comply to the validation webhook rules STEP: Updating a validating webhook configuration's rules to not include the create operation STEP: Creating a configMap that does not comply to the validation webhook rules STEP: Patching a validating webhook configuration's rules to include the create operation STEP: Creating a configMap that does not comply to the validation webhook rules [AfterEach] [sig-api-machinery] AdmissionWebhook 
[Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 6 21:48:03.332: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-8024" for this suite. STEP: Destroying namespace "webhook-8024-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:6.075 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 patching/updating a validating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance]","total":278,"completed":180,"skipped":2959,"failed":0} S ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 6 21:48:03.427: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Apr 6 21:48:04.621: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Apr 6 21:48:06.632: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721806484, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721806484, loc:(*time.Location)(0x78ee080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721806484, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721806484, loc:(*time.Location)(0x78ee080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Apr 6 21:48:09.672: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] patching/updating a mutating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a mutating 
webhook configuration STEP: Updating a mutating webhook configuration's rules to not include the create operation STEP: Creating a configMap that should not be mutated STEP: Patching a mutating webhook configuration's rules to include the create operation STEP: Creating a configMap that should be mutated [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 6 21:48:09.827: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-2715" for this suite. STEP: Destroying namespace "webhook-2715-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:6.531 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 patching/updating a mutating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]","total":278,"completed":181,"skipped":2960,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 6 21:48:09.958: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name secret-test-f941e0e5-9cce-4003-9a56-4ecb52fd625f STEP: Creating a pod to test consume secrets Apr 6 21:48:10.037: INFO: Waiting up to 5m0s for pod "pod-secrets-bdcdcbbd-b966-4177-a596-2baba892b79a" in namespace "secrets-4412" to be "success or failure" Apr 6 21:48:10.041: INFO: Pod "pod-secrets-bdcdcbbd-b966-4177-a596-2baba892b79a": Phase="Pending", Reason="", readiness=false. Elapsed: 3.580869ms Apr 6 21:48:12.044: INFO: Pod "pod-secrets-bdcdcbbd-b966-4177-a596-2baba892b79a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007093838s Apr 6 21:48:14.049: INFO: Pod "pod-secrets-bdcdcbbd-b966-4177-a596-2baba892b79a": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.011813383s STEP: Saw pod success Apr 6 21:48:14.049: INFO: Pod "pod-secrets-bdcdcbbd-b966-4177-a596-2baba892b79a" satisfied condition "success or failure" Apr 6 21:48:14.052: INFO: Trying to get logs from node jerma-worker pod pod-secrets-bdcdcbbd-b966-4177-a596-2baba892b79a container secret-volume-test: STEP: delete the pod Apr 6 21:48:14.072: INFO: Waiting for pod pod-secrets-bdcdcbbd-b966-4177-a596-2baba892b79a to disappear Apr 6 21:48:14.077: INFO: Pod pod-secrets-bdcdcbbd-b966-4177-a596-2baba892b79a no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 6 21:48:14.077: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-4412" for this suite. •{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":182,"skipped":2983,"failed":0} SSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition creating/deleting custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 6 21:48:14.103: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] creating/deleting custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Apr 6 21:48:14.171: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 6 21:48:15.207: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-3448" for this suite. 
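The create/delete cycle above needs nothing more than a minimal CRD; a sketch with an illustrative group and kind (the test generates random names):

  cat <<'EOF' | kubectl create -f -
  apiVersion: apiextensions.k8s.io/v1
  kind: CustomResourceDefinition
  metadata:
    name: foos.example.com
  spec:
    group: example.com
    scope: Namespaced
    names:
      plural: foos
      singular: foo
      kind: Foo
    versions:
    - name: v1
      served: true
      storage: true
      schema:
        openAPIV3Schema:
          type: object
          x-kubernetes-preserve-unknown-fields: true
  EOF
  kubectl delete crd foos.example.com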
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition creating/deleting custom resource definition objects works [Conformance]","total":278,"completed":183,"skipped":2987,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl label should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 6 21:48:15.218: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [BeforeEach] Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1275 STEP: creating the pod Apr 6 21:48:15.259: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-2982' Apr 6 21:48:17.957: INFO: stderr: "" Apr 6 21:48:17.957: INFO: stdout: "pod/pause created\n" Apr 6 21:48:17.957: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [pause] Apr 6 21:48:17.957: INFO: Waiting up to 5m0s for pod "pause" in namespace "kubectl-2982" to be "running and ready" Apr 6 21:48:18.006: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 48.681923ms Apr 6 21:48:20.009: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 2.052472738s Apr 6 21:48:22.013: INFO: Pod "pause": Phase="Running", Reason="", readiness=true. Elapsed: 4.056577978s Apr 6 21:48:22.014: INFO: Pod "pause" satisfied condition "running and ready" Apr 6 21:48:22.014: INFO: Wanted all 1 pods to be running and ready. Result: true. 
Pods: [pause] [It] should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: adding the label testing-label with value testing-label-value to a pod Apr 6 21:48:22.014: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label=testing-label-value --namespace=kubectl-2982' Apr 6 21:48:22.110: INFO: stderr: "" Apr 6 21:48:22.110: INFO: stdout: "pod/pause labeled\n" STEP: verifying the pod has the label testing-label with the value testing-label-value Apr 6 21:48:22.110: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-2982' Apr 6 21:48:22.203: INFO: stderr: "" Apr 6 21:48:22.203: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 5s testing-label-value\n" STEP: removing the label testing-label of a pod Apr 6 21:48:22.203: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label- --namespace=kubectl-2982' Apr 6 21:48:22.318: INFO: stderr: "" Apr 6 21:48:22.318: INFO: stdout: "pod/pause labeled\n" STEP: verifying the pod doesn't have the label testing-label Apr 6 21:48:22.318: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-2982' Apr 6 21:48:22.407: INFO: stderr: "" Apr 6 21:48:22.407: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 5s \n" [AfterEach] Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1282 STEP: using delete to clean up resources Apr 6 21:48:22.407: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-2982' Apr 6 21:48:22.580: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Apr 6 21:48:22.580: INFO: stdout: "pod \"pause\" force deleted\n" Apr 6 21:48:22.580: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=pause --no-headers --namespace=kubectl-2982' Apr 6 21:48:22.690: INFO: stderr: "No resources found in kubectl-2982 namespace.\n" Apr 6 21:48:22.690: INFO: stdout: "" Apr 6 21:48:22.690: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=pause --namespace=kubectl-2982 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Apr 6 21:48:22.776: INFO: stderr: "" Apr 6 21:48:22.776: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 6 21:48:22.776: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-2982" for this suite. 
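The label round trip above, lifted out of the framework's quoting, comes down to four commands; the trailing dash in the third removes the label, and -L adds it as an output column:

  kubectl label pod pause testing-label=testing-label-value
  kubectl get pod pause -L testing-label
  kubectl label pod pause testing-label-
  kubectl get pod pause -L testing-label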
• [SLOW TEST:7.564 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1272 should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl label should update the label on a resource [Conformance]","total":278,"completed":184,"skipped":3012,"failed":0} SSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 6 21:48:22.783: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0644 on tmpfs Apr 6 21:48:22.985: INFO: Waiting up to 5m0s for pod "pod-690b4694-fb89-4784-a4cf-019051e47efe" in namespace "emptydir-6160" to be "success or failure" Apr 6 21:48:22.989: INFO: Pod "pod-690b4694-fb89-4784-a4cf-019051e47efe": Phase="Pending", Reason="", readiness=false. Elapsed: 3.903615ms Apr 6 21:48:24.993: INFO: Pod "pod-690b4694-fb89-4784-a4cf-019051e47efe": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00730967s Apr 6 21:48:26.997: INFO: Pod "pod-690b4694-fb89-4784-a4cf-019051e47efe": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011461942s STEP: Saw pod success Apr 6 21:48:26.997: INFO: Pod "pod-690b4694-fb89-4784-a4cf-019051e47efe" satisfied condition "success or failure" Apr 6 21:48:27.000: INFO: Trying to get logs from node jerma-worker pod pod-690b4694-fb89-4784-a4cf-019051e47efe container test-container: STEP: delete the pod Apr 6 21:48:27.055: INFO: Waiting for pod pod-690b4694-fb89-4784-a4cf-019051e47efe to disappear Apr 6 21:48:27.065: INFO: Pod pod-690b4694-fb89-4784-a4cf-019051e47efe no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 6 21:48:27.065: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-6160" for this suite. 
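The tmpfs-backed emptyDir this test mounts corresponds to medium: Memory in the volume spec. A sketch (pod name, image, and the verification command are illustrative; the test's own container performs an equivalent write-and-check as root):

  cat <<'EOF' | kubectl create -f -
  apiVersion: v1
  kind: Pod
  metadata:
    name: emptydir-0644-tmpfs-demo
  spec:
    restartPolicy: Never
    containers:
    - name: test-container
      image: busybox:1.29
      command: ["sh", "-c", "echo content > /mnt/volume/f && chmod 0644 /mnt/volume/f && stat -c '%a %u' /mnt/volume/f && grep /mnt/volume /proc/mounts"]
      volumeMounts:
      - name: vol
        mountPath: /mnt/volume
    volumes:
    - name: vol
      emptyDir:
        medium: Memory
  EOF

The /proc/mounts line confirms the volume is tmpfs rather than node disk.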
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":185,"skipped":3020,"failed":0} S ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 6 21:48:27.072: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name projected-configmap-test-volume-map-8dd7bc64-260c-4cbf-870b-22ec08eb4426 STEP: Creating a pod to test consume configMaps Apr 6 21:48:27.147: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-8f4ed2cc-6971-4512-89cc-66186cd384f9" in namespace "projected-6448" to be "success or failure" Apr 6 21:48:27.203: INFO: Pod "pod-projected-configmaps-8f4ed2cc-6971-4512-89cc-66186cd384f9": Phase="Pending", Reason="", readiness=false. Elapsed: 56.123846ms Apr 6 21:48:29.207: INFO: Pod "pod-projected-configmaps-8f4ed2cc-6971-4512-89cc-66186cd384f9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.059614804s Apr 6 21:48:31.210: INFO: Pod "pod-projected-configmaps-8f4ed2cc-6971-4512-89cc-66186cd384f9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.062833797s STEP: Saw pod success Apr 6 21:48:31.210: INFO: Pod "pod-projected-configmaps-8f4ed2cc-6971-4512-89cc-66186cd384f9" satisfied condition "success or failure" Apr 6 21:48:31.212: INFO: Trying to get logs from node jerma-worker pod pod-projected-configmaps-8f4ed2cc-6971-4512-89cc-66186cd384f9 container projected-configmap-volume-test: STEP: delete the pod Apr 6 21:48:31.278: INFO: Waiting for pod pod-projected-configmaps-8f4ed2cc-6971-4512-89cc-66186cd384f9 to disappear Apr 6 21:48:31.299: INFO: Pod pod-projected-configmaps-8f4ed2cc-6971-4512-89cc-66186cd384f9 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 6 21:48:31.299: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-6448" for this suite. •{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":186,"skipped":3021,"failed":0} SS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and ensure its status is promptly calculated. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 6 21:48:31.307: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and ensure its status is promptly calculated. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 6 21:48:38.418: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-1512" for this suite. • [SLOW TEST:7.119 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and ensure its status is promptly calculated. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and ensure its status is promptly calculated. [Conformance]","total":278,"completed":187,"skipped":3023,"failed":0} SSSSSSSSSSS ------------------------------ [k8s.io] [sig-node] Pods Extended [k8s.io] Delete Grace Period should be submitted and removed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 6 21:48:38.426: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Delete Grace Period /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:46 [It] should be submitted and removed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the pod STEP: setting up selector STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes Apr 6 21:48:42.512: INFO: Asynchronously running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config proxy -p 0' STEP: deleting the pod gracefully STEP: verifying the kubelet observed the termination notice Apr 6 21:48:47.619: INFO: no pod exists with the name we were looking for, assuming the termination request was observed and completed [AfterEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 6 21:48:47.622: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-6420" for this suite.
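Graceful deletion as performed above can be driven the same way from the CLI (pod name illustrative). The API server stamps deletionTimestamp and deletionGracePeriodSeconds, the kubelet observes the termination notice, and the object disappears once the containers have stopped:

  kubectl delete pod test-pod --grace-period=30
  kubectl get pod test-pod
  # Error from server (NotFound) once termination has been observed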
• [SLOW TEST:9.205 seconds] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 [k8s.io] Delete Grace Period /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should be submitted and removed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] [sig-node] Pods Extended [k8s.io] Delete Grace Period should be submitted and removed [Conformance]","total":278,"completed":188,"skipped":3034,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 6 21:48:47.632: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the rc1 STEP: create the rc2 STEP: set half of pods created by rc simpletest-rc-to-be-deleted to have rc simpletest-rc-to-stay as owner as well STEP: delete the rc simpletest-rc-to-be-deleted STEP: wait for the rc to be deleted STEP: Gathering metrics W0406 21:48:59.315470 6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Apr 6 21:48:59.315: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 6 21:48:59.315: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-7835" for this suite. 
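The survival rule being tested lives in metadata.ownerReferences: the collector deletes a dependent only when every remaining owner is gone or being deleted. Inspecting the references on one of the doubly-owned pods (pod name illustrative):

  kubectl get pod some-dependent-pod \
    -o jsonpath='{range .metadata.ownerReferences[*]}{.kind}/{.name}{"\n"}{end}'
  # ReplicationController/simpletest-rc-to-be-deleted
  # ReplicationController/simpletest-rc-to-stay

Because simpletest-rc-to-stay remains a valid owner, deleting the other RC leaves these pods in place, which is exactly what the test verifies.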
• [SLOW TEST:11.689 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]","total":278,"completed":189,"skipped":3050,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 6 21:48:59.321: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177 [It] should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Apr 6 21:48:59.627: INFO: >>> kubeConfig: /root/.kube/config STEP: creating the pod STEP: submitting the pod to kubernetes [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 6 21:49:03.705: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-283" for this suite. •{"msg":"PASSED [k8s.io] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance]","total":278,"completed":190,"skipped":3079,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should be able to update and delete ResourceQuota. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 6 21:49:03.725: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to update and delete ResourceQuota. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a ResourceQuota STEP: Getting a ResourceQuota STEP: Updating a ResourceQuota STEP: Verifying a ResourceQuota was modified STEP: Deleting a ResourceQuota STEP: Verifying the deleted ResourceQuota [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 6 21:49:03.863: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-3932" for this suite. •{"msg":"PASSED [sig-api-machinery] ResourceQuota should be able to update and delete ResourceQuota. 
[Conformance]","total":278,"completed":191,"skipped":3103,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicaSet should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 6 21:49:03.871: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replicaset STEP: Waiting for a default service account to be provisioned in namespace [It] should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Apr 6 21:49:03.914: INFO: Creating ReplicaSet my-hostname-basic-5da55b50-8911-49b3-85c7-c166a1a68bc0 Apr 6 21:49:03.928: INFO: Pod name my-hostname-basic-5da55b50-8911-49b3-85c7-c166a1a68bc0: Found 0 pods out of 1 Apr 6 21:49:08.931: INFO: Pod name my-hostname-basic-5da55b50-8911-49b3-85c7-c166a1a68bc0: Found 1 pods out of 1 Apr 6 21:49:08.931: INFO: Ensuring a pod for ReplicaSet "my-hostname-basic-5da55b50-8911-49b3-85c7-c166a1a68bc0" is running Apr 6 21:49:08.933: INFO: Pod "my-hostname-basic-5da55b50-8911-49b3-85c7-c166a1a68bc0-f8656" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-04-06 21:49:03 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-04-06 21:49:06 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-04-06 21:49:06 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-04-06 21:49:03 +0000 UTC Reason: Message:}]) Apr 6 21:49:08.934: INFO: Trying to dial the pod Apr 6 21:49:13.947: INFO: Controller my-hostname-basic-5da55b50-8911-49b3-85c7-c166a1a68bc0: Got expected result from replica 1 [my-hostname-basic-5da55b50-8911-49b3-85c7-c166a1a68bc0-f8656]: "my-hostname-basic-5da55b50-8911-49b3-85c7-c166a1a68bc0-f8656", 1 of 1 required successes so far [AfterEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 6 21:49:13.947: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replicaset-6003" for this suite. 
• [SLOW TEST:10.083 seconds] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] ReplicaSet should serve a basic image on each replica with a public image [Conformance]","total":278,"completed":192,"skipped":3116,"failed":0} SSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl replace should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 6 21:49:13.954: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [BeforeEach] Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1790 [It] should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: running the image docker.io/library/httpd:2.4.38-alpine Apr 6 21:49:13.992: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-pod --generator=run-pod/v1 --image=docker.io/library/httpd:2.4.38-alpine --labels=run=e2e-test-httpd-pod --namespace=kubectl-8220' Apr 6 21:49:14.105: INFO: stderr: "" Apr 6 21:49:14.105: INFO: stdout: "pod/e2e-test-httpd-pod created\n" STEP: verifying the pod e2e-test-httpd-pod is running STEP: verifying the pod e2e-test-httpd-pod was created Apr 6 21:49:19.156: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod e2e-test-httpd-pod --namespace=kubectl-8220 -o json' Apr 6 21:49:19.248: INFO: stderr: "" Apr 6 21:49:19.249: INFO: stdout: "{\n \"apiVersion\": \"v1\",\n \"kind\": \"Pod\",\n \"metadata\": {\n \"creationTimestamp\": \"2020-04-06T21:49:14Z\",\n \"labels\": {\n \"run\": \"e2e-test-httpd-pod\"\n },\n \"name\": \"e2e-test-httpd-pod\",\n \"namespace\": \"kubectl-8220\",\n \"resourceVersion\": \"5985742\",\n \"selfLink\": \"/api/v1/namespaces/kubectl-8220/pods/e2e-test-httpd-pod\",\n \"uid\": \"a9c1d47e-07b5-47bc-84c9-de5a95000e78\"\n },\n \"spec\": {\n \"containers\": [\n {\n \"image\": \"docker.io/library/httpd:2.4.38-alpine\",\n \"imagePullPolicy\": \"IfNotPresent\",\n \"name\": \"e2e-test-httpd-pod\",\n \"resources\": {},\n \"terminationMessagePath\": \"/dev/termination-log\",\n \"terminationMessagePolicy\": \"File\",\n \"volumeMounts\": [\n {\n \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n \"name\": \"default-token-bv8tf\",\n \"readOnly\": true\n }\n ]\n }\n ],\n \"dnsPolicy\": \"ClusterFirst\",\n \"enableServiceLinks\": true,\n \"nodeName\": \"jerma-worker\",\n \"priority\": 0,\n \"restartPolicy\": \"Always\",\n \"schedulerName\": \"default-scheduler\",\n \"securityContext\": {},\n \"serviceAccount\": \"default\",\n \"serviceAccountName\": \"default\",\n \"terminationGracePeriodSeconds\": 30,\n \"tolerations\": [\n {\n \"effect\": \"NoExecute\",\n 
\"key\": \"node.kubernetes.io/not-ready\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n },\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/unreachable\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n }\n ],\n \"volumes\": [\n {\n \"name\": \"default-token-bv8tf\",\n \"secret\": {\n \"defaultMode\": 420,\n \"secretName\": \"default-token-bv8tf\"\n }\n }\n ]\n },\n \"status\": {\n \"conditions\": [\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-04-06T21:49:14Z\",\n \"status\": \"True\",\n \"type\": \"Initialized\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-04-06T21:49:16Z\",\n \"status\": \"True\",\n \"type\": \"Ready\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-04-06T21:49:16Z\",\n \"status\": \"True\",\n \"type\": \"ContainersReady\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-04-06T21:49:14Z\",\n \"status\": \"True\",\n \"type\": \"PodScheduled\"\n }\n ],\n \"containerStatuses\": [\n {\n \"containerID\": \"containerd://7a302741d55742fab884ff9fccd71bdf4091a70d21eb3807ba0a46f600e44916\",\n \"image\": \"docker.io/library/httpd:2.4.38-alpine\",\n \"imageID\": \"docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060\",\n \"lastState\": {},\n \"name\": \"e2e-test-httpd-pod\",\n \"ready\": true,\n \"restartCount\": 0,\n \"started\": true,\n \"state\": {\n \"running\": {\n \"startedAt\": \"2020-04-06T21:49:16Z\"\n }\n }\n }\n ],\n \"hostIP\": \"172.17.0.10\",\n \"phase\": \"Running\",\n \"podIP\": \"10.244.1.9\",\n \"podIPs\": [\n {\n \"ip\": \"10.244.1.9\"\n }\n ],\n \"qosClass\": \"BestEffort\",\n \"startTime\": \"2020-04-06T21:49:14Z\"\n }\n}\n" STEP: replace the image in the pod Apr 6 21:49:19.249: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config replace -f - --namespace=kubectl-8220' Apr 6 21:49:19.490: INFO: stderr: "" Apr 6 21:49:19.490: INFO: stdout: "pod/e2e-test-httpd-pod replaced\n" STEP: verifying the pod e2e-test-httpd-pod has the right image docker.io/library/busybox:1.29 [AfterEach] Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1795 Apr 6 21:49:19.495: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-httpd-pod --namespace=kubectl-8220' Apr 6 21:49:29.251: INFO: stderr: "" Apr 6 21:49:29.251: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 6 21:49:29.252: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-8220" for this suite. 
• [SLOW TEST:15.304 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1786 should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl replace should update a single-container pod's image [Conformance]","total":278,"completed":193,"skipped":3125,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl expose should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 6 21:49:29.259: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [It] should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating Agnhost RC Apr 6 21:49:29.331: INFO: namespace kubectl-7363 Apr 6 21:49:29.331: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-7363' Apr 6 21:49:29.572: INFO: stderr: "" Apr 6 21:49:29.572: INFO: stdout: "replicationcontroller/agnhost-master created\n" STEP: Waiting for Agnhost master to start. Apr 6 21:49:30.577: INFO: Selector matched 1 pods for map[app:agnhost] Apr 6 21:49:30.577: INFO: Found 0 / 1 Apr 6 21:49:31.671: INFO: Selector matched 1 pods for map[app:agnhost] Apr 6 21:49:31.671: INFO: Found 0 / 1 Apr 6 21:49:32.577: INFO: Selector matched 1 pods for map[app:agnhost] Apr 6 21:49:32.577: INFO: Found 0 / 1 Apr 6 21:49:33.611: INFO: Selector matched 1 pods for map[app:agnhost] Apr 6 21:49:33.611: INFO: Found 1 / 1 Apr 6 21:49:33.611: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 Apr 6 21:49:33.614: INFO: Selector matched 1 pods for map[app:agnhost] Apr 6 21:49:33.614: INFO: ForEach: Found 1 pods from the filter. Now looping through them. Apr 6 21:49:33.614: INFO: wait on agnhost-master startup in kubectl-7363 Apr 6 21:49:33.614: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs agnhost-master-pb6xn agnhost-master --namespace=kubectl-7363' Apr 6 21:49:33.729: INFO: stderr: "" Apr 6 21:49:33.729: INFO: stdout: "Paused\n" STEP: exposing RC Apr 6 21:49:33.729: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose rc agnhost-master --name=rm2 --port=1234 --target-port=6379 --namespace=kubectl-7363' Apr 6 21:49:33.856: INFO: stderr: "" Apr 6 21:49:33.856: INFO: stdout: "service/rm2 exposed\n" Apr 6 21:49:33.870: INFO: Service rm2 in namespace kubectl-7363 found. 
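The expose chain stacks a second service on the first: rm2 fronts the RC's pods on 1234 -> 6379 and, as the next step shows, rm3 then fronts the rm2 selector on 2345 -> 6379. Standalone:

  kubectl expose rc agnhost-master --name=rm2 --port=1234 --target-port=6379
  kubectl expose service rm2 --name=rm3 --port=2345 --target-port=6379
  kubectl get endpoints rm2 rm3   # both resolve to the same agnhost pod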
STEP: exposing service Apr 6 21:49:35.876: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose service rm2 --name=rm3 --port=2345 --target-port=6379 --namespace=kubectl-7363' Apr 6 21:49:36.050: INFO: stderr: "" Apr 6 21:49:36.050: INFO: stdout: "service/rm3 exposed\n" Apr 6 21:49:36.053: INFO: Service rm3 in namespace kubectl-7363 found. [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 6 21:49:38.061: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-7363" for this suite. • [SLOW TEST:8.811 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl expose /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1188 should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl expose should create services for rc [Conformance]","total":278,"completed":194,"skipped":3148,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 6 21:49:38.071: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replicaset STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Given a Pod with a 'name' label pod-adoption-release is created STEP: When a replicaset with a matching selector is created STEP: Then the orphan pod is adopted STEP: When the matched label of one of its pods change Apr 6 21:49:43.217: INFO: Pod name pod-adoption-release: Found 1 pods out of 1 STEP: Then the pod is released [AfterEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 6 21:49:43.271: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replicaset-769" for this suite. 
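Adoption and release hinge entirely on label match: the ReplicaSet writes itself into a matching pod's ownerReferences and removes itself when the labels stop matching. To watch the transition by hand (a sketch; the test's label key is name):

  kubectl get pod pod-adoption-release \
    -o jsonpath='{.metadata.ownerReferences[*].kind}'   # ReplicaSet while adopted
  kubectl label pod pod-adoption-release name=released --overwrite
  kubectl get pod pod-adoption-release \
    -o jsonpath='{.metadata.ownerReferences[*].kind}'   # empty once released

After the release the ReplicaSet is one pod short of its replica count and creates a replacement, which is why the log still finds 1 pod out of 1.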
• [SLOW TEST:5.238 seconds] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance]","total":278,"completed":195,"skipped":3169,"failed":0} SSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] should include custom resource definition resources in discovery documents [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 6 21:49:43.309: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] should include custom resource definition resources in discovery documents [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: fetching the /apis discovery document STEP: finding the apiextensions.k8s.io API group in the /apis discovery document STEP: finding the apiextensions.k8s.io/v1 API group/version in the /apis discovery document STEP: fetching the /apis/apiextensions.k8s.io discovery document STEP: finding the apiextensions.k8s.io/v1 API group/version in the /apis/apiextensions.k8s.io discovery document STEP: fetching the /apis/apiextensions.k8s.io/v1 discovery document STEP: finding customresourcedefinitions resources in the /apis/apiextensions.k8s.io/v1 discovery document [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 6 21:49:43.685: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-7563" for this suite. 
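The discovery documents this test walks are ordinary GETs against the API server, reachable with kubectl's raw mode (jq used for filtering is an assumption):

  kubectl get --raw /apis | jq '.groups[].name' | grep apiextensions
  kubectl get --raw /apis/apiextensions.k8s.io/v1 \
    | jq '.resources[] | select(.name == "customresourcedefinitions") | .verbs'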
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] should include custom resource definition resources in discovery documents [Conformance]","total":278,"completed":196,"skipped":3172,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 6 21:49:43.738: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating projection with secret that has name projected-secret-test-2ccaa294-6a92-476d-b7d3-8688bb443452 STEP: Creating a pod to test consume secrets Apr 6 21:49:43.824: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-654251c8-bee6-4d2e-8007-dec3bf4f7f3c" in namespace "projected-8568" to be "success or failure" Apr 6 21:49:43.827: INFO: Pod "pod-projected-secrets-654251c8-bee6-4d2e-8007-dec3bf4f7f3c": Phase="Pending", Reason="", readiness=false. Elapsed: 3.637101ms Apr 6 21:49:45.832: INFO: Pod "pod-projected-secrets-654251c8-bee6-4d2e-8007-dec3bf4f7f3c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007912105s Apr 6 21:49:47.836: INFO: Pod "pod-projected-secrets-654251c8-bee6-4d2e-8007-dec3bf4f7f3c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012076456s STEP: Saw pod success Apr 6 21:49:47.836: INFO: Pod "pod-projected-secrets-654251c8-bee6-4d2e-8007-dec3bf4f7f3c" satisfied condition "success or failure" Apr 6 21:49:47.839: INFO: Trying to get logs from node jerma-worker pod pod-projected-secrets-654251c8-bee6-4d2e-8007-dec3bf4f7f3c container projected-secret-volume-test: STEP: delete the pod Apr 6 21:49:47.876: INFO: Waiting for pod pod-projected-secrets-654251c8-bee6-4d2e-8007-dec3bf4f7f3c to disappear Apr 6 21:49:47.928: INFO: Pod pod-projected-secrets-654251c8-bee6-4d2e-8007-dec3bf4f7f3c no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 6 21:49:47.928: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8568" for this suite. 
•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance]","total":278,"completed":197,"skipped":3214,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 6 21:49:47.938: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:86 Apr 6 21:49:47.978: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Apr 6 21:49:47.989: INFO: Waiting for terminating namespaces to be deleted... Apr 6 21:49:47.992: INFO: Logging pods the kubelet thinks is on node jerma-worker before test Apr 6 21:49:47.996: INFO: kindnet-c5svj from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container statuses recorded) Apr 6 21:49:47.996: INFO: Container kindnet-cni ready: true, restart count 0 Apr 6 21:49:47.996: INFO: kube-proxy-44mlz from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container statuses recorded) Apr 6 21:49:47.996: INFO: Container kube-proxy ready: true, restart count 0 Apr 6 21:49:47.996: INFO: Logging pods the kubelet thinks is on node jerma-worker2 before test Apr 6 21:49:48.018: INFO: kindnet-zk6sq from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container statuses recorded) Apr 6 21:49:48.018: INFO: Container kindnet-cni ready: true, restart count 0 Apr 6 21:49:48.018: INFO: kube-bench-hk6h6 from default started at 2020-03-26 15:21:52 +0000 UTC (1 container statuses recorded) Apr 6 21:49:48.018: INFO: Container kube-bench ready: false, restart count 0 Apr 6 21:49:48.018: INFO: pod-adoption-release from replicaset-769 started at 2020-04-06 21:49:38 +0000 UTC (1 container statuses recorded) Apr 6 21:49:48.018: INFO: Container pod-adoption-release ready: true, restart count 0 Apr 6 21:49:48.018: INFO: kube-proxy-75q42 from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container statuses recorded) Apr 6 21:49:48.018: INFO: Container kube-proxy ready: true, restart count 0 Apr 6 21:49:48.018: INFO: kube-hunter-8g6pb from default started at 2020-03-26 15:21:33 +0000 UTC (1 container statuses recorded) Apr 6 21:49:48.018: INFO: Container kube-hunter ready: false, restart count 0 Apr 6 21:49:48.018: INFO: pod-adoption-release-8lrq6 from replicaset-769 started at 2020-04-06 21:49:43 +0000 UTC (1 container statuses recorded) Apr 6 21:49:48.018: INFO: Container pod-adoption-release ready: true, restart count 0 [It] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random label on the found node. 
STEP: verifying the node has the label kubernetes.io/e2e-09159378-911d-49bd-8fe8-6b1f798b685f 95 STEP: Trying to create a pod(pod4) with hostport 54322 and hostIP 0.0.0.0(empty string here) and expect scheduled STEP: Trying to create another pod(pod5) with hostport 54322 but hostIP 127.0.0.1 on the node which pod4 resides and expect not scheduled STEP: removing the label kubernetes.io/e2e-09159378-911d-49bd-8fe8-6b1f798b685f off the node jerma-worker STEP: verifying the node doesn't have the label kubernetes.io/e2e-09159378-911d-49bd-8fe8-6b1f798b685f [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 6 21:54:56.150: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-7365" for this suite. [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:77 • [SLOW TEST:308.227 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]","total":278,"completed":198,"skipped":3242,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 6 21:54:56.166: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:133 [It] should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Apr 6 21:54:56.257: INFO: Creating daemon "daemon-set" with a node selector STEP: Initially, daemon pods should not be running on any nodes. Apr 6 21:54:56.281: INFO: Number of nodes with available pods: 0 Apr 6 21:54:56.281: INFO: Number of running nodes: 0, number of available pods: 0 STEP: Change node label to blue, check that daemon pod is launched. 
Apr 6 21:54:56.364: INFO: Number of nodes with available pods: 0 Apr 6 21:54:56.364: INFO: Node jerma-worker is running more than one daemon pod Apr 6 21:54:57.403: INFO: Number of nodes with available pods: 0 Apr 6 21:54:57.403: INFO: Node jerma-worker is running more than one daemon pod Apr 6 21:54:58.368: INFO: Number of nodes with available pods: 0 Apr 6 21:54:58.368: INFO: Node jerma-worker is running more than one daemon pod Apr 6 21:54:59.368: INFO: Number of nodes with available pods: 0 Apr 6 21:54:59.368: INFO: Node jerma-worker is running more than one daemon pod Apr 6 21:55:00.368: INFO: Number of nodes with available pods: 1 Apr 6 21:55:00.368: INFO: Number of running nodes: 1, number of available pods: 1 STEP: Update the node label to green, and wait for daemons to be unscheduled Apr 6 21:55:00.400: INFO: Number of nodes with available pods: 1 Apr 6 21:55:00.400: INFO: Number of running nodes: 0, number of available pods: 1 Apr 6 21:55:01.442: INFO: Number of nodes with available pods: 0 Apr 6 21:55:01.442: INFO: Number of running nodes: 0, number of available pods: 0 STEP: Update DaemonSet node selector to green, and change its update strategy to RollingUpdate Apr 6 21:55:01.730: INFO: Number of nodes with available pods: 0 Apr 6 21:55:01.730: INFO: Node jerma-worker is running more than one daemon pod Apr 6 21:55:02.733: INFO: Number of nodes with available pods: 0 Apr 6 21:55:02.733: INFO: Node jerma-worker is running more than one daemon pod Apr 6 21:55:03.734: INFO: Number of nodes with available pods: 0 Apr 6 21:55:03.734: INFO: Node jerma-worker is running more than one daemon pod Apr 6 21:55:04.735: INFO: Number of nodes with available pods: 0 Apr 6 21:55:04.735: INFO: Node jerma-worker is running more than one daemon pod Apr 6 21:55:05.736: INFO: Number of nodes with available pods: 0 Apr 6 21:55:05.736: INFO: Node jerma-worker is running more than one daemon pod Apr 6 21:55:06.737: INFO: Number of nodes with available pods: 0 Apr 6 21:55:06.737: INFO: Node jerma-worker is running more than one daemon pod Apr 6 21:55:07.734: INFO: Number of nodes with available pods: 0 Apr 6 21:55:07.734: INFO: Node jerma-worker is running more than one daemon pod Apr 6 21:55:08.734: INFO: Number of nodes with available pods: 0 Apr 6 21:55:08.735: INFO: Node jerma-worker is running more than one daemon pod Apr 6 21:55:09.734: INFO: Number of nodes with available pods: 0 Apr 6 21:55:09.734: INFO: Node jerma-worker is running more than one daemon pod Apr 6 21:55:10.760: INFO: Number of nodes with available pods: 0 Apr 6 21:55:10.760: INFO: Node jerma-worker is running more than one daemon pod Apr 6 21:55:11.748: INFO: Number of nodes with available pods: 0 Apr 6 21:55:11.748: INFO: Node jerma-worker is running more than one daemon pod Apr 6 21:55:12.734: INFO: Number of nodes with available pods: 1 Apr 6 21:55:12.735: INFO: Number of running nodes: 1, number of available pods: 1 [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:99 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-7755, will wait for the garbage collector to delete the pods Apr 6 21:55:12.800: INFO: Deleting DaemonSet.extensions daemon-set took: 6.811003ms Apr 6 21:55:13.100: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.230965ms Apr 6 21:55:19.304: INFO: Number of nodes with available pods: 0 Apr 6 21:55:19.304: INFO: Number of running nodes: 0, number of 
available pods: 0 Apr 6 21:55:19.328: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-7755/daemonsets","resourceVersion":"5987069"},"items":null} Apr 6 21:55:19.330: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-7755/pods","resourceVersion":"5987069"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 6 21:55:19.359: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-7755" for this suite. • [SLOW TEST:23.201 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance]","total":278,"completed":199,"skipped":3310,"failed":0} SSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields at the schema root [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 6 21:55:19.366: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD preserving unknown fields at the schema root [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Apr 6 21:55:19.403: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with any unknown properties Apr 6 21:55:22.328: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2530 create -f -' Apr 6 21:55:25.170: INFO: stderr: "" Apr 6 21:55:25.170: INFO: stdout: "e2e-test-crd-publish-openapi-5320-crd.crd-publish-openapi-test-unknown-at-root.example.com/test-cr created\n" Apr 6 21:55:25.170: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2530 delete e2e-test-crd-publish-openapi-5320-crds test-cr' Apr 6 21:55:25.261: INFO: stderr: "" Apr 6 21:55:25.261: INFO: stdout: "e2e-test-crd-publish-openapi-5320-crd.crd-publish-openapi-test-unknown-at-root.example.com \"test-cr\" deleted\n" Apr 6 21:55:25.261: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2530 apply -f -' Apr 6 21:55:25.526: INFO: stderr: "" Apr 6 21:55:25.526: INFO: stdout: "e2e-test-crd-publish-openapi-5320-crd.crd-publish-openapi-test-unknown-at-root.example.com/test-cr created\n" Apr 6 21:55:25.526: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2530 delete e2e-test-crd-publish-openapi-5320-crds test-cr' Apr 6 21:55:25.613: INFO: stderr: "" Apr 6 21:55:25.613: INFO: stdout: 
"e2e-test-crd-publish-openapi-5320-crd.crd-publish-openapi-test-unknown-at-root.example.com \"test-cr\" deleted\n" STEP: kubectl explain works to explain CR Apr 6 21:55:25.614: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-5320-crds' Apr 6 21:55:25.827: INFO: stderr: "" Apr 6 21:55:25.827: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-5320-crd\nVERSION: crd-publish-openapi-test-unknown-at-root.example.com/v1\n\nDESCRIPTION:\n \n" [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 6 21:55:28.714: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-2530" for this suite. • [SLOW TEST:9.355 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD preserving unknown fields at the schema root [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields at the schema root [Conformance]","total":278,"completed":200,"skipped":3314,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 6 21:55:28.722: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name s-test-opt-del-68eaffec-cef6-4c0c-90f6-78d46c3ed120 STEP: Creating secret with name s-test-opt-upd-e9a732a0-ac24-4745-85fb-01b044450f64 STEP: Creating the pod STEP: Deleting secret s-test-opt-del-68eaffec-cef6-4c0c-90f6-78d46c3ed120 STEP: Updating secret s-test-opt-upd-e9a732a0-ac24-4745-85fb-01b044450f64 STEP: Creating secret with name s-test-opt-create-bbf43003-822b-42f9-83c7-6dab5e20844b STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 6 21:56:39.251: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-7465" for this suite. 
• [SLOW TEST:70.537 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":201,"skipped":3331,"failed":0} SSSSS ------------------------------ [sig-cli] Kubectl client Kubectl run --rm job should create a job from an image, then delete the job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 6 21:56:39.260: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [It] should create a job from an image, then delete the job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: executing a command with run --rm and attach with stdin Apr 6 21:56:39.308: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-6342 run e2e-test-rm-busybox-job --image=docker.io/library/busybox:1.29 --rm=true --generator=job/v1 --restart=OnFailure --attach=true --stdin -- sh -c cat && echo 'stdin closed'' Apr 6 21:56:42.743: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. 
Use kubectl run --generator=run-pod/v1 or kubectl create instead.\nIf you don't see a command prompt, try pressing enter.\nI0406 21:56:42.635708 2698 log.go:172] (0xc000a1c0b0) (0xc000742140) Create stream\nI0406 21:56:42.635785 2698 log.go:172] (0xc000a1c0b0) (0xc000742140) Stream added, broadcasting: 1\nI0406 21:56:42.639327 2698 log.go:172] (0xc000a1c0b0) Reply frame received for 1\nI0406 21:56:42.639382 2698 log.go:172] (0xc000a1c0b0) (0xc0007f4000) Create stream\nI0406 21:56:42.639403 2698 log.go:172] (0xc000a1c0b0) (0xc0007f4000) Stream added, broadcasting: 3\nI0406 21:56:42.640444 2698 log.go:172] (0xc000a1c0b0) Reply frame received for 3\nI0406 21:56:42.640496 2698 log.go:172] (0xc000a1c0b0) (0xc0007421e0) Create stream\nI0406 21:56:42.640507 2698 log.go:172] (0xc000a1c0b0) (0xc0007421e0) Stream added, broadcasting: 5\nI0406 21:56:42.641818 2698 log.go:172] (0xc000a1c0b0) Reply frame received for 5\nI0406 21:56:42.641855 2698 log.go:172] (0xc000a1c0b0) (0xc00064fae0) Create stream\nI0406 21:56:42.641867 2698 log.go:172] (0xc000a1c0b0) (0xc00064fae0) Stream added, broadcasting: 7\nI0406 21:56:42.642902 2698 log.go:172] (0xc000a1c0b0) Reply frame received for 7\nI0406 21:56:42.643064 2698 log.go:172] (0xc0007f4000) (3) Writing data frame\nI0406 21:56:42.643191 2698 log.go:172] (0xc0007f4000) (3) Writing data frame\nI0406 21:56:42.644227 2698 log.go:172] (0xc000a1c0b0) Data frame received for 5\nI0406 21:56:42.644256 2698 log.go:172] (0xc0007421e0) (5) Data frame handling\nI0406 21:56:42.644273 2698 log.go:172] (0xc0007421e0) (5) Data frame sent\nI0406 21:56:42.645102 2698 log.go:172] (0xc000a1c0b0) Data frame received for 5\nI0406 21:56:42.645244 2698 log.go:172] (0xc0007421e0) (5) Data frame handling\nI0406 21:56:42.645271 2698 log.go:172] (0xc0007421e0) (5) Data frame sent\nI0406 21:56:42.691450 2698 log.go:172] (0xc000a1c0b0) Data frame received for 7\nI0406 21:56:42.691489 2698 log.go:172] (0xc00064fae0) (7) Data frame handling\nI0406 21:56:42.691512 2698 log.go:172] (0xc000a1c0b0) Data frame received for 5\nI0406 21:56:42.691521 2698 log.go:172] (0xc0007421e0) (5) Data frame handling\nI0406 21:56:42.691869 2698 log.go:172] (0xc000a1c0b0) Data frame received for 1\nI0406 21:56:42.691910 2698 log.go:172] (0xc000742140) (1) Data frame handling\nI0406 21:56:42.691944 2698 log.go:172] (0xc000742140) (1) Data frame sent\nI0406 21:56:42.692126 2698 log.go:172] (0xc000a1c0b0) (0xc000742140) Stream removed, broadcasting: 1\nI0406 21:56:42.692461 2698 log.go:172] (0xc000a1c0b0) (0xc000742140) Stream removed, broadcasting: 1\nI0406 21:56:42.692535 2698 log.go:172] (0xc000a1c0b0) (0xc0007f4000) Stream removed, broadcasting: 3\nI0406 21:56:42.692708 2698 log.go:172] (0xc000a1c0b0) Go away received\nI0406 21:56:42.692815 2698 log.go:172] (0xc000a1c0b0) (0xc0007421e0) Stream removed, broadcasting: 5\nI0406 21:56:42.692864 2698 log.go:172] (0xc000a1c0b0) (0xc00064fae0) Stream removed, broadcasting: 7\n" Apr 6 21:56:42.743: INFO: stdout: "abcd1234stdin closed\njob.batch \"e2e-test-rm-busybox-job\" deleted\n" STEP: verifying the job e2e-test-rm-busybox-job was deleted [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 6 21:56:44.756: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-6342" for this suite. 
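The stderr above flags --generator=job/v1 as deprecated; on kubectl releases where the generator flags have since been removed, roughly the same stdin round-trip can be exercised with a bare pod (flag behavior assumed, not taken from this run):

printf 'abcd1234' | kubectl run e2e-test-rm-busybox-job \
    --image=docker.io/library/busybox:1.29 \
    --rm -i --restart=Never \
    -- sh -c 'cat && echo stdin closed'
# --rm deletes the pod once the attached command exits;
# stdout should read "abcd1234stdin closed", matching the output logged above.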
• [SLOW TEST:5.527 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl run --rm job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1837 should create a job from an image, then delete the job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl run --rm job should create a job from an image, then delete the job [Conformance]","total":278,"completed":202,"skipped":3336,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 6 21:56:44.787: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [BeforeEach] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:324 [It] should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating a replication controller Apr 6 21:56:44.938: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-3047' Apr 6 21:56:45.395: INFO: stderr: "" Apr 6 21:56:45.395: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. Apr 6 21:56:45.396: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-3047' Apr 6 21:56:45.518: INFO: stderr: "" Apr 6 21:56:45.518: INFO: stdout: "update-demo-nautilus-4hx2l update-demo-nautilus-5f2qz " Apr 6 21:56:45.518: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-4hx2l -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-3047' Apr 6 21:56:45.654: INFO: stderr: "" Apr 6 21:56:45.654: INFO: stdout: "" Apr 6 21:56:45.654: INFO: update-demo-nautilus-4hx2l is created but not running Apr 6 21:56:50.654: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-3047' Apr 6 21:56:50.753: INFO: stderr: "" Apr 6 21:56:50.753: INFO: stdout: "update-demo-nautilus-4hx2l update-demo-nautilus-5f2qz " Apr 6 21:56:50.754: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-4hx2l -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-3047' Apr 6 21:56:50.847: INFO: stderr: "" Apr 6 21:56:50.847: INFO: stdout: "true" Apr 6 21:56:50.847: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-4hx2l -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-3047' Apr 6 21:56:50.940: INFO: stderr: "" Apr 6 21:56:50.940: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Apr 6 21:56:50.940: INFO: validating pod update-demo-nautilus-4hx2l Apr 6 21:56:50.944: INFO: got data: { "image": "nautilus.jpg" } Apr 6 21:56:50.944: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Apr 6 21:56:50.944: INFO: update-demo-nautilus-4hx2l is verified up and running Apr 6 21:56:50.944: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-5f2qz -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-3047' Apr 6 21:56:51.034: INFO: stderr: "" Apr 6 21:56:51.034: INFO: stdout: "true" Apr 6 21:56:51.034: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-5f2qz -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-3047' Apr 6 21:56:51.129: INFO: stderr: "" Apr 6 21:56:51.129: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Apr 6 21:56:51.129: INFO: validating pod update-demo-nautilus-5f2qz Apr 6 21:56:51.133: INFO: got data: { "image": "nautilus.jpg" } Apr 6 21:56:51.133: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Apr 6 21:56:51.133: INFO: update-demo-nautilus-5f2qz is verified up and running STEP: using delete to clean up resources Apr 6 21:56:51.133: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-3047' Apr 6 21:56:51.252: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Apr 6 21:56:51.252: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" Apr 6 21:56:51.252: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-3047' Apr 6 21:56:51.339: INFO: stderr: "No resources found in kubectl-3047 namespace.\n" Apr 6 21:56:51.339: INFO: stdout: "" Apr 6 21:56:51.340: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-3047 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Apr 6 21:56:51.537: INFO: stderr: "" Apr 6 21:56:51.537: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 6 21:56:51.537: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-3047" for this suite. 
• [SLOW TEST:6.758 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:322 should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance]","total":278,"completed":203,"skipped":3360,"failed":0} SS ------------------------------ [sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 6 21:56:51.545: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating a watch on configmaps with label A STEP: creating a watch on configmaps with label B STEP: creating a watch on configmaps with label A or B STEP: creating a configmap with label A and ensuring the correct watchers observe the notification Apr 6 21:56:51.714: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-6493 /api/v1/namespaces/watch-6493/configmaps/e2e-watch-test-configmap-a 9ae4324b-b4c5-471a-8f16-38381008b2dd 5987493 0 2020-04-06 21:56:51 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},} Apr 6 21:56:51.714: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-6493 /api/v1/namespaces/watch-6493/configmaps/e2e-watch-test-configmap-a 9ae4324b-b4c5-471a-8f16-38381008b2dd 5987493 0 2020-04-06 21:56:51 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},} STEP: modifying configmap A and ensuring the correct watchers observe the notification Apr 6 21:57:01.722: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-6493 /api/v1/namespaces/watch-6493/configmaps/e2e-watch-test-configmap-a 9ae4324b-b4c5-471a-8f16-38381008b2dd 5987547 0 2020-04-06 21:56:51 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} Apr 6 21:57:01.722: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-6493 /api/v1/namespaces/watch-6493/configmaps/e2e-watch-test-configmap-a 9ae4324b-b4c5-471a-8f16-38381008b2dd 5987547 0 2020-04-06 21:56:51 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} STEP: modifying configmap A again and ensuring the correct watchers observe the notification Apr 6 21:57:11.730: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-6493 /api/v1/namespaces/watch-6493/configmaps/e2e-watch-test-configmap-a 9ae4324b-b4c5-471a-8f16-38381008b2dd 5987577 0 
2020-04-06 21:56:51 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} Apr 6 21:57:11.730: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-6493 /api/v1/namespaces/watch-6493/configmaps/e2e-watch-test-configmap-a 9ae4324b-b4c5-471a-8f16-38381008b2dd 5987577 0 2020-04-06 21:56:51 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} STEP: deleting configmap A and ensuring the correct watchers observe the notification Apr 6 21:57:21.740: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-6493 /api/v1/namespaces/watch-6493/configmaps/e2e-watch-test-configmap-a 9ae4324b-b4c5-471a-8f16-38381008b2dd 5987607 0 2020-04-06 21:56:51 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} Apr 6 21:57:21.740: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-6493 /api/v1/namespaces/watch-6493/configmaps/e2e-watch-test-configmap-a 9ae4324b-b4c5-471a-8f16-38381008b2dd 5987607 0 2020-04-06 21:56:51 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} STEP: creating a configmap with label B and ensuring the correct watchers observe the notification Apr 6 21:57:31.754: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-6493 /api/v1/namespaces/watch-6493/configmaps/e2e-watch-test-configmap-b a828676d-1719-4cec-83ed-72447df0ab3b 5987637 0 2020-04-06 21:57:31 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},} Apr 6 21:57:31.754: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-6493 /api/v1/namespaces/watch-6493/configmaps/e2e-watch-test-configmap-b a828676d-1719-4cec-83ed-72447df0ab3b 5987637 0 2020-04-06 21:57:31 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},} STEP: deleting configmap B and ensuring the correct watchers observe the notification Apr 6 21:57:41.761: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-6493 /api/v1/namespaces/watch-6493/configmaps/e2e-watch-test-configmap-b a828676d-1719-4cec-83ed-72447df0ab3b 5987665 0 2020-04-06 21:57:31 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},} Apr 6 21:57:41.762: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-6493 /api/v1/namespaces/watch-6493/configmaps/e2e-watch-test-configmap-b a828676d-1719-4cec-83ed-72447df0ab3b 5987665 0 2020-04-06 21:57:31 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 6 21:57:51.762: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-6493" for this suite. 
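The ADDED / MODIFIED / DELETED notifications above can be reproduced from the CLI; the label values follow this run, while the two-terminal setup is an illustrative stand-in for the test's three programmatic watchers:

# terminal 1: stream notifications for label A, as watcher A does
kubectl get configmaps -l watch-this-configmap=multiple-watchers-A --watch

# terminal 2: drive one ADDED, one MODIFIED, and one DELETED event
cat <<'EOF' | kubectl create -f -
apiVersion: v1
kind: ConfigMap
metadata:
  name: e2e-watch-test-configmap-a
  labels:
    watch-this-configmap: multiple-watchers-A
EOF
kubectl patch configmap e2e-watch-test-configmap-a -p '{"data":{"mutation":"1"}}'
kubectl delete configmap e2e-watch-test-configmap-a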
• [SLOW TEST:60.227 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance]","total":278,"completed":204,"skipped":3362,"failed":0} S ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 6 21:57:51.773: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:86 Apr 6 21:57:51.819: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Apr 6 21:57:51.828: INFO: Waiting for terminating namespaces to be deleted... Apr 6 21:57:51.830: INFO: Logging pods the kubelet thinks are on node jerma-worker before test Apr 6 21:57:51.848: INFO: kube-proxy-44mlz from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container status recorded) Apr 6 21:57:51.848: INFO: Container kube-proxy ready: true, restart count 0 Apr 6 21:57:51.848: INFO: kindnet-c5svj from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container status recorded) Apr 6 21:57:51.848: INFO: Container kindnet-cni ready: true, restart count 0 Apr 6 21:57:51.848: INFO: Logging pods the kubelet thinks are on node jerma-worker2 before test Apr 6 21:57:51.891: INFO: kube-proxy-75q42 from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container status recorded) Apr 6 21:57:51.891: INFO: Container kube-proxy ready: true, restart count 0 Apr 6 21:57:51.891: INFO: kube-hunter-8g6pb from default started at 2020-03-26 15:21:33 +0000 UTC (1 container status recorded) Apr 6 21:57:51.891: INFO: Container kube-hunter ready: false, restart count 0 Apr 6 21:57:51.891: INFO: kindnet-zk6sq from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container status recorded) Apr 6 21:57:51.891: INFO: Container kindnet-cni ready: true, restart count 0 Apr 6 21:57:51.891: INFO: kube-bench-hk6h6 from default started at 2020-03-26 15:21:52 +0000 UTC (1 container status recorded) Apr 6 21:57:51.891: INFO: Container kube-bench ready: false, restart count 0 [It] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random label on the found node.
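In the pod-creation steps that follow, scheduling is blocked only when the full (hostIP, hostPort, protocol) tuple of two pods collides; each pod below differs from the previous one in exactly one field of that tuple. A sketch of the pod shape involved, with the pod and container names and the container port assumed rather than read from this log:

cat <<'EOF' | kubectl create -f -
apiVersion: v1
kind: Pod
metadata:
  name: pod1                        # illustrative name
spec:
  nodeSelector:
    kubernetes.io/e2e-9e24952b-11cd-463b-a137-e9191f10a2ec: "90"   # pins the pod to the labeled node
  containers:
  - name: agnhost
    image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8
    ports:
    - containerPort: 8080           # assumed container port
      hostPort: 54321
      hostIP: 127.0.0.1             # pod2 swaps this for 127.0.0.2; pod3 keeps 127.0.0.2 but uses protocol: UDP
      protocol: TCP
EOF

All three pods fit on the same node because no two share the same tuple; a 0.0.0.0 hostIP, by contrast, overlaps every address on that port and protocol, which is the conflict the earlier SchedulerPredicates test (completed 198) relied on.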
STEP: verifying the node has the label kubernetes.io/e2e-9e24952b-11cd-463b-a137-e9191f10a2ec 90 STEP: Trying to create a pod(pod1) with hostport 54321 and hostIP 127.0.0.1 and expect scheduled STEP: Trying to create another pod(pod2) with hostport 54321 but hostIP 127.0.0.2 on the node which pod1 resides and expect scheduled STEP: Trying to create a third pod(pod3) with hostport 54321, hostIP 127.0.0.2 but use UDP protocol on the node which pod2 resides STEP: removing the label kubernetes.io/e2e-9e24952b-11cd-463b-a137-e9191f10a2ec off the node jerma-worker2 STEP: verifying the node doesn't have the label kubernetes.io/e2e-9e24952b-11cd-463b-a137-e9191f10a2ec [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 6 21:58:08.087: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-4095" for this suite. [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:77 • [SLOW TEST:16.320 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance]","total":278,"completed":205,"skipped":3363,"failed":0} SSSSSS ------------------------------ [sig-api-machinery] Aggregator Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 6 21:58:08.094: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename aggregator STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:76 Apr 6 21:58:08.142: INFO: >>> kubeConfig: /root/.kube/config [It] Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Registering the sample API server. 
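Registering here means creating an APIService object that points the aggregation layer at the Service fronting the sample apiserver; a minimal sketch, with the group, version, and service names assumed rather than taken from this log:

cat <<'EOF' | kubectl create -f -
apiVersion: apiregistration.k8s.io/v1
kind: APIService
metadata:
  name: v1alpha1.wardle.example.com   # hypothetical group/version
spec:
  group: wardle.example.com
  version: v1alpha1
  service:
    name: sample-api                  # Service in front of sample-apiserver-deployment
    namespace: aggregator-167
  insecureSkipTLSVerify: true         # the e2e test instead pins a CA bundle
  groupPriorityMinimum: 2000
  versionPriority: 200
EOF
# Once the APIService reports Available=True, the aggregated group appears in discovery:
kubectl get apiservices v1alpha1.wardle.example.com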
Apr 6 21:58:08.615: INFO: deployment "sample-apiserver-deployment" doesn't have the required revision set Apr 6 21:58:10.747: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721807088, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721807088, loc:(*time.Location)(0x78ee080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721807088, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721807088, loc:(*time.Location)(0x78ee080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-867766ffc6\" is progressing."}}, CollisionCount:(*int32)(nil)} Apr 6 21:58:13.401: INFO: Waited 642.069277ms for the sample-apiserver to be ready to handle requests. [AfterEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:67 [AfterEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 6 21:58:14.595: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "aggregator-167" for this suite. • [SLOW TEST:6.507 seconds] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Aggregator Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance]","total":278,"completed":206,"skipped":3369,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 6 21:58:14.601: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name secret-test-map-4c38bfdd-bf78-45b6-9888-85d71afc0a01 STEP: Creating a pod to test consume secrets Apr 6 21:58:14.869: INFO: Waiting up to 5m0s for pod "pod-secrets-9258fdbe-39f0-467f-8a7c-4a5128566053" in namespace "secrets-8726" to be "success or failure" Apr 6 21:58:14.896: INFO: Pod "pod-secrets-9258fdbe-39f0-467f-8a7c-4a5128566053": Phase="Pending", Reason="", readiness=false. 
Elapsed: 27.092723ms Apr 6 21:58:16.899: INFO: Pod "pod-secrets-9258fdbe-39f0-467f-8a7c-4a5128566053": Phase="Pending", Reason="", readiness=false. Elapsed: 2.030124927s Apr 6 21:58:18.903: INFO: Pod "pod-secrets-9258fdbe-39f0-467f-8a7c-4a5128566053": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.034039666s STEP: Saw pod success Apr 6 21:58:18.903: INFO: Pod "pod-secrets-9258fdbe-39f0-467f-8a7c-4a5128566053" satisfied condition "success or failure" Apr 6 21:58:18.906: INFO: Trying to get logs from node jerma-worker pod pod-secrets-9258fdbe-39f0-467f-8a7c-4a5128566053 container secret-volume-test: STEP: delete the pod Apr 6 21:58:18.968: INFO: Waiting for pod pod-secrets-9258fdbe-39f0-467f-8a7c-4a5128566053 to disappear Apr 6 21:58:18.974: INFO: Pod pod-secrets-9258fdbe-39f0-467f-8a7c-4a5128566053 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 6 21:58:18.974: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-8726" for this suite. •{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":278,"completed":207,"skipped":3386,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 6 21:58:18.982: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set Apr 6 21:58:22.110: INFO: Expected: &{} to match Container's Termination Message: -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 6 21:58:22.148: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-8825" for this suite. 
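What the assertion above checks: with TerminationMessagePolicy FallbackToLogsOnError, container logs are copied into the termination message only when the container fails, so a succeeding container ends with an empty message (the "Expected: &{} ..." line). A sketch with illustrative names and image:

cat <<'EOF' | kubectl create -f -
apiVersion: v1
kind: Pod
metadata:
  name: termination-message-demo
spec:
  restartPolicy: Never
  containers:
  - name: main
    image: busybox:1.29
    command: ["sh", "-c", "echo some log output; exit 0"]   # exits 0, so logs are not copied
    terminationMessagePath: /dev/termination-log
    terminationMessagePolicy: FallbackToLogsOnError
EOF
kubectl get pod termination-message-demo \
  -o jsonpath='{.status.containerStatuses[0].state.terminated.message}'   # expect empty output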
•{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":278,"completed":208,"skipped":3443,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 6 21:58:22.169: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for pods for Hostname [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-2.dns-test-service-2.dns-5091.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-2.dns-test-service-2.dns-5091.svc.cluster.local;test -n "$$(getent hosts dns-querier-2)" && echo OK > /results/wheezy_hosts@dns-querier-2;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-5091.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-2.dns-test-service-2.dns-5091.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-2.dns-test-service-2.dns-5091.svc.cluster.local;test -n "$$(getent hosts dns-querier-2)" && echo OK > /results/jessie_hosts@dns-querier-2;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-5091.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Apr 6 21:58:28.333: INFO: DNS probes using dns-5091/dns-test-0c27f4f2-c365-4f58-aa39-2e502de3de49 succeeded STEP: deleting the pod STEP: deleting the test headless service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 6 21:58:28.428: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-5091" for this suite. 
• [SLOW TEST:6.287 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for pods for Hostname [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance]","total":278,"completed":209,"skipped":3458,"failed":0} SSSSS ------------------------------ [sig-apps] Deployment deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 6 21:58:28.457: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:69 [It] deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Apr 6 21:58:28.844: INFO: Pod name rollover-pod: Found 0 pods out of 1 Apr 6 21:58:33.853: INFO: Pod name rollover-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Apr 6 21:58:33.853: INFO: Waiting for pods owned by replica set "test-rollover-controller" to become ready Apr 6 21:58:35.857: INFO: Creating deployment "test-rollover-deployment" Apr 6 21:58:35.867: INFO: Make sure deployment "test-rollover-deployment" performs scaling operations Apr 6 21:58:37.872: INFO: Check revision of new replica set for deployment "test-rollover-deployment" Apr 6 21:58:37.876: INFO: Ensure that both replica sets have 1 created replica Apr 6 21:58:37.881: INFO: Rollover old replica sets for deployment "test-rollover-deployment" with new image update Apr 6 21:58:37.886: INFO: Updating deployment test-rollover-deployment Apr 6 21:58:37.886: INFO: Wait deployment "test-rollover-deployment" to be observed by the deployment controller Apr 6 21:58:39.922: INFO: Wait for revision update of deployment "test-rollover-deployment" to 2 Apr 6 21:58:39.928: INFO: Make sure deployment "test-rollover-deployment" is complete Apr 6 21:58:39.933: INFO: all replica sets need to contain the pod-template-hash label Apr 6 21:58:39.933: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721807115, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721807115, loc:(*time.Location)(0x78ee080)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721807118, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721807115, loc:(*time.Location)(0x78ee080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)} Apr 6 21:58:41.941: INFO: all replica sets 
need to contain the pod-template-hash label Apr 6 21:58:41.941: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721807115, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721807115, loc:(*time.Location)(0x78ee080)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721807120, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721807115, loc:(*time.Location)(0x78ee080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)} Apr 6 21:58:43.983: INFO: all replica sets need to contain the pod-template-hash label Apr 6 21:58:43.983: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721807115, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721807115, loc:(*time.Location)(0x78ee080)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721807120, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721807115, loc:(*time.Location)(0x78ee080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)} Apr 6 21:58:45.941: INFO: all replica sets need to contain the pod-template-hash label Apr 6 21:58:45.941: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721807115, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721807115, loc:(*time.Location)(0x78ee080)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721807120, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721807115, loc:(*time.Location)(0x78ee080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)} Apr 6 21:58:47.942: INFO: all replica sets need to contain the pod-template-hash label Apr 6 21:58:47.942: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721807115, loc:(*time.Location)(0x78ee080)}}, 
LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721807115, loc:(*time.Location)(0x78ee080)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721807120, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721807115, loc:(*time.Location)(0x78ee080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)} Apr 6 21:58:49.942: INFO: all replica sets need to contain the pod-template-hash label Apr 6 21:58:49.943: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721807115, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721807115, loc:(*time.Location)(0x78ee080)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721807120, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721807115, loc:(*time.Location)(0x78ee080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)} Apr 6 21:58:51.945: INFO: Apr 6 21:58:51.945: INFO: Ensure that both old replica sets have no replicas [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:63 Apr 6 21:58:51.951: INFO: Deployment "test-rollover-deployment": &Deployment{ObjectMeta:{test-rollover-deployment deployment-4574 /apis/apps/v1/namespaces/deployment-4574/deployments/test-rollover-deployment 4749733c-18f1-434f-b8c8-b8ae39db8ea1 5988200 2 2020-04-06 21:58:35 +0000 UTC map[name:rollover-pod] map[deployment.kubernetes.io/revision:2] [] [] []},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod] map[] [] [] []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0029704a8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] 
[]}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:0,MaxSurge:1,},},MinReadySeconds:10,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2020-04-06 21:58:35 +0000 UTC,LastTransitionTime:2020-04-06 21:58:35 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-rollover-deployment-574d6dfbff" has successfully progressed.,LastUpdateTime:2020-04-06 21:58:51 +0000 UTC,LastTransitionTime:2020-04-06 21:58:35 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},} Apr 6 21:58:51.954: INFO: New ReplicaSet "test-rollover-deployment-574d6dfbff" of Deployment "test-rollover-deployment": &ReplicaSet{ObjectMeta:{test-rollover-deployment-574d6dfbff deployment-4574 /apis/apps/v1/namespaces/deployment-4574/replicasets/test-rollover-deployment-574d6dfbff ec2b4526-2b21-41c7-870e-b150ace5f65a 5988189 2 2020-04-06 21:58:37 +0000 UTC map[name:rollover-pod pod-template-hash:574d6dfbff] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment test-rollover-deployment 4749733c-18f1-434f-b8c8-b8ae39db8ea1 0xc004f972d7 0xc004f972d8}] [] []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 574d6dfbff,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod-template-hash:574d6dfbff] map[] [] [] []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc004f97348 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:2,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} Apr 6 21:58:51.954: INFO: All old ReplicaSets of Deployment "test-rollover-deployment": Apr 6 21:58:51.954: INFO: &ReplicaSet{ObjectMeta:{test-rollover-controller deployment-4574 /apis/apps/v1/namespaces/deployment-4574/replicasets/test-rollover-controller f23afbab-946c-4aeb-90f0-dacbe5d5acb9 5988199 2 2020-04-06 21:58:28 +0000 UTC map[name:rollover-pod pod:httpd] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2] [{apps/v1 Deployment test-rollover-deployment 4749733c-18f1-434f-b8c8-b8ae39db8ea1 0xc004f971ef 0xc004f97200}] [] []},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod:httpd] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] 
[] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc004f97268 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Apr 6 21:58:51.954: INFO: &ReplicaSet{ObjectMeta:{test-rollover-deployment-f6c94f66c deployment-4574 /apis/apps/v1/namespaces/deployment-4574/replicasets/test-rollover-deployment-f6c94f66c 50a0fa28-5a8d-4aec-a2d1-dc09fdbdd978 5988143 2 2020-04-06 21:58:35 +0000 UTC map[name:rollover-pod pod-template-hash:f6c94f66c] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-rollover-deployment 4749733c-18f1-434f-b8c8-b8ae39db8ea1 0xc004f973b0 0xc004f973b1}] [] []},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: f6c94f66c,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod-template-hash:f6c94f66c] map[] [] [] []} {[] [] [{redis-slave gcr.io/google_samples/gb-redisslave:nonexistent [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc004f97428 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Apr 6 21:58:51.957: INFO: Pod "test-rollover-deployment-574d6dfbff-xvsbl" is available: &Pod{ObjectMeta:{test-rollover-deployment-574d6dfbff-xvsbl test-rollover-deployment-574d6dfbff- deployment-4574 /api/v1/namespaces/deployment-4574/pods/test-rollover-deployment-574d6dfbff-xvsbl 000a5f69-c98a-4d86-b9e7-d91b8224f51f 5988157 0 2020-04-06 21:58:37 +0000 UTC map[name:rollover-pod pod-template-hash:574d6dfbff] map[] [{apps/v1 ReplicaSet test-rollover-deployment-574d6dfbff ec2b4526-2b21-41c7-870e-b150ace5f65a 0xc004f97957 0xc004f97958}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-vgn46,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-vgn46,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-vgn46,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-06 21:58:38 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-06 21:58:40 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-06 21:58:40 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-06 21:58:37 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.10,PodIP:10.244.1.21,StartTime:2020-04-06 21:58:38 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-04-06 21:58:40 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,ImageID:gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5,ContainerID:containerd://2f941b97b781f3d9ee55782c26287e6916aefb38e7119249f009bdf0013094ad,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.21,},},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 6 21:58:51.957: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-4574" for this suite. • [SLOW TEST:23.506 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Deployment deployment should support rollover [Conformance]","total":278,"completed":210,"skipped":3463,"failed":0} SSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 6 21:58:51.963: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0777 on tmpfs Apr 6 21:58:52.039: INFO: Waiting up to 5m0s for pod "pod-82bf7a56-aaf4-47fa-ab61-4f9bba2e4bfe" in namespace "emptydir-6058" to be "success or failure" Apr 6 21:58:52.042: INFO: Pod "pod-82bf7a56-aaf4-47fa-ab61-4f9bba2e4bfe": Phase="Pending", Reason="", readiness=false. Elapsed: 2.955764ms Apr 6 21:58:54.051: INFO: Pod "pod-82bf7a56-aaf4-47fa-ab61-4f9bba2e4bfe": Phase="Pending", Reason="", readiness=false. Elapsed: 2.0120284s Apr 6 21:58:56.055: INFO: Pod "pod-82bf7a56-aaf4-47fa-ab61-4f9bba2e4bfe": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.016009621s STEP: Saw pod success Apr 6 21:58:56.055: INFO: Pod "pod-82bf7a56-aaf4-47fa-ab61-4f9bba2e4bfe" satisfied condition "success or failure" Apr 6 21:58:56.058: INFO: Trying to get logs from node jerma-worker pod pod-82bf7a56-aaf4-47fa-ab61-4f9bba2e4bfe container test-container: STEP: delete the pod Apr 6 21:58:56.079: INFO: Waiting for pod pod-82bf7a56-aaf4-47fa-ab61-4f9bba2e4bfe to disappear Apr 6 21:58:56.094: INFO: Pod pod-82bf7a56-aaf4-47fa-ab61-4f9bba2e4bfe no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 6 21:58:56.094: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-6058" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":211,"skipped":3473,"failed":0} SSSSSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 6 21:58:56.107: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-2869.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-2869.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-2869.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-2869.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-2869.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-2869.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-2869.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-2869.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-2869.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-2869.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-2869.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-2869.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-2869.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 116.22.108.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.108.22.116_udp@PTR;check="$$(dig +tcp +noall +answer +search 116.22.108.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.108.22.116_tcp@PTR;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-2869.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-2869.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-2869.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-2869.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-2869.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-2869.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-2869.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-2869.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-2869.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-2869.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-2869.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-2869.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-2869.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 116.22.108.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.108.22.116_udp@PTR;check="$$(dig +tcp +noall +answer +search 116.22.108.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.108.22.116_tcp@PTR;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Apr 6 21:59:02.376: INFO: Unable to read wheezy_udp@dns-test-service.dns-2869.svc.cluster.local from pod dns-2869/dns-test-13002c8c-9f01-4823-ba86-ec814964b7a9: the server could not find the requested resource (get pods dns-test-13002c8c-9f01-4823-ba86-ec814964b7a9) Apr 6 21:59:02.379: INFO: Unable to read wheezy_tcp@dns-test-service.dns-2869.svc.cluster.local from pod dns-2869/dns-test-13002c8c-9f01-4823-ba86-ec814964b7a9: the server could not find the requested resource (get pods dns-test-13002c8c-9f01-4823-ba86-ec814964b7a9) Apr 6 21:59:02.382: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-2869.svc.cluster.local from pod dns-2869/dns-test-13002c8c-9f01-4823-ba86-ec814964b7a9: the server could not find the requested resource (get pods dns-test-13002c8c-9f01-4823-ba86-ec814964b7a9) Apr 6 21:59:02.385: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-2869.svc.cluster.local from pod dns-2869/dns-test-13002c8c-9f01-4823-ba86-ec814964b7a9: the server could not find the requested resource (get pods dns-test-13002c8c-9f01-4823-ba86-ec814964b7a9) Apr 6 21:59:02.406: INFO: Unable to read jessie_udp@dns-test-service.dns-2869.svc.cluster.local from pod dns-2869/dns-test-13002c8c-9f01-4823-ba86-ec814964b7a9: the server could not find the requested resource (get pods dns-test-13002c8c-9f01-4823-ba86-ec814964b7a9) Apr 6 21:59:02.408: INFO: Unable to read jessie_tcp@dns-test-service.dns-2869.svc.cluster.local from pod dns-2869/dns-test-13002c8c-9f01-4823-ba86-ec814964b7a9: the server could not find the requested resource (get pods dns-test-13002c8c-9f01-4823-ba86-ec814964b7a9) Apr 6 21:59:02.411: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-2869.svc.cluster.local from pod dns-2869/dns-test-13002c8c-9f01-4823-ba86-ec814964b7a9: the server could not find the requested resource (get pods dns-test-13002c8c-9f01-4823-ba86-ec814964b7a9) Apr 6 21:59:02.414: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-2869.svc.cluster.local from pod dns-2869/dns-test-13002c8c-9f01-4823-ba86-ec814964b7a9: the server could not find the requested resource (get pods dns-test-13002c8c-9f01-4823-ba86-ec814964b7a9) Apr 6 21:59:02.567: INFO: Lookups using dns-2869/dns-test-13002c8c-9f01-4823-ba86-ec814964b7a9 failed for: [wheezy_udp@dns-test-service.dns-2869.svc.cluster.local wheezy_tcp@dns-test-service.dns-2869.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-2869.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-2869.svc.cluster.local jessie_udp@dns-test-service.dns-2869.svc.cluster.local jessie_tcp@dns-test-service.dns-2869.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-2869.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-2869.svc.cluster.local] Apr 6 21:59:07.577: INFO: Unable to read wheezy_udp@dns-test-service.dns-2869.svc.cluster.local from pod dns-2869/dns-test-13002c8c-9f01-4823-ba86-ec814964b7a9: the server could not find the requested resource (get pods dns-test-13002c8c-9f01-4823-ba86-ec814964b7a9) Apr 6 21:59:07.580: INFO: Unable to read wheezy_tcp@dns-test-service.dns-2869.svc.cluster.local from pod dns-2869/dns-test-13002c8c-9f01-4823-ba86-ec814964b7a9: the server could not find the requested resource (get pods 
dns-test-13002c8c-9f01-4823-ba86-ec814964b7a9) Apr 6 21:59:07.582: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-2869.svc.cluster.local from pod dns-2869/dns-test-13002c8c-9f01-4823-ba86-ec814964b7a9: the server could not find the requested resource (get pods dns-test-13002c8c-9f01-4823-ba86-ec814964b7a9) Apr 6 21:59:07.584: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-2869.svc.cluster.local from pod dns-2869/dns-test-13002c8c-9f01-4823-ba86-ec814964b7a9: the server could not find the requested resource (get pods dns-test-13002c8c-9f01-4823-ba86-ec814964b7a9) Apr 6 21:59:07.601: INFO: Unable to read jessie_udp@dns-test-service.dns-2869.svc.cluster.local from pod dns-2869/dns-test-13002c8c-9f01-4823-ba86-ec814964b7a9: the server could not find the requested resource (get pods dns-test-13002c8c-9f01-4823-ba86-ec814964b7a9) Apr 6 21:59:07.604: INFO: Unable to read jessie_tcp@dns-test-service.dns-2869.svc.cluster.local from pod dns-2869/dns-test-13002c8c-9f01-4823-ba86-ec814964b7a9: the server could not find the requested resource (get pods dns-test-13002c8c-9f01-4823-ba86-ec814964b7a9) Apr 6 21:59:07.606: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-2869.svc.cluster.local from pod dns-2869/dns-test-13002c8c-9f01-4823-ba86-ec814964b7a9: the server could not find the requested resource (get pods dns-test-13002c8c-9f01-4823-ba86-ec814964b7a9) Apr 6 21:59:07.609: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-2869.svc.cluster.local from pod dns-2869/dns-test-13002c8c-9f01-4823-ba86-ec814964b7a9: the server could not find the requested resource (get pods dns-test-13002c8c-9f01-4823-ba86-ec814964b7a9) Apr 6 21:59:07.628: INFO: Lookups using dns-2869/dns-test-13002c8c-9f01-4823-ba86-ec814964b7a9 failed for: [wheezy_udp@dns-test-service.dns-2869.svc.cluster.local wheezy_tcp@dns-test-service.dns-2869.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-2869.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-2869.svc.cluster.local jessie_udp@dns-test-service.dns-2869.svc.cluster.local jessie_tcp@dns-test-service.dns-2869.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-2869.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-2869.svc.cluster.local] Apr 6 21:59:12.572: INFO: Unable to read wheezy_udp@dns-test-service.dns-2869.svc.cluster.local from pod dns-2869/dns-test-13002c8c-9f01-4823-ba86-ec814964b7a9: the server could not find the requested resource (get pods dns-test-13002c8c-9f01-4823-ba86-ec814964b7a9) Apr 6 21:59:12.576: INFO: Unable to read wheezy_tcp@dns-test-service.dns-2869.svc.cluster.local from pod dns-2869/dns-test-13002c8c-9f01-4823-ba86-ec814964b7a9: the server could not find the requested resource (get pods dns-test-13002c8c-9f01-4823-ba86-ec814964b7a9) Apr 6 21:59:12.580: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-2869.svc.cluster.local from pod dns-2869/dns-test-13002c8c-9f01-4823-ba86-ec814964b7a9: the server could not find the requested resource (get pods dns-test-13002c8c-9f01-4823-ba86-ec814964b7a9) Apr 6 21:59:12.583: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-2869.svc.cluster.local from pod dns-2869/dns-test-13002c8c-9f01-4823-ba86-ec814964b7a9: the server could not find the requested resource (get pods dns-test-13002c8c-9f01-4823-ba86-ec814964b7a9) Apr 6 21:59:12.606: INFO: Unable to read jessie_udp@dns-test-service.dns-2869.svc.cluster.local from pod dns-2869/dns-test-13002c8c-9f01-4823-ba86-ec814964b7a9: the server could 
not find the requested resource (get pods dns-test-13002c8c-9f01-4823-ba86-ec814964b7a9) Apr 6 21:59:12.609: INFO: Unable to read jessie_tcp@dns-test-service.dns-2869.svc.cluster.local from pod dns-2869/dns-test-13002c8c-9f01-4823-ba86-ec814964b7a9: the server could not find the requested resource (get pods dns-test-13002c8c-9f01-4823-ba86-ec814964b7a9) Apr 6 21:59:12.612: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-2869.svc.cluster.local from pod dns-2869/dns-test-13002c8c-9f01-4823-ba86-ec814964b7a9: the server could not find the requested resource (get pods dns-test-13002c8c-9f01-4823-ba86-ec814964b7a9) Apr 6 21:59:12.615: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-2869.svc.cluster.local from pod dns-2869/dns-test-13002c8c-9f01-4823-ba86-ec814964b7a9: the server could not find the requested resource (get pods dns-test-13002c8c-9f01-4823-ba86-ec814964b7a9) Apr 6 21:59:12.633: INFO: Lookups using dns-2869/dns-test-13002c8c-9f01-4823-ba86-ec814964b7a9 failed for: [wheezy_udp@dns-test-service.dns-2869.svc.cluster.local wheezy_tcp@dns-test-service.dns-2869.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-2869.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-2869.svc.cluster.local jessie_udp@dns-test-service.dns-2869.svc.cluster.local jessie_tcp@dns-test-service.dns-2869.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-2869.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-2869.svc.cluster.local] Apr 6 21:59:17.572: INFO: Unable to read wheezy_udp@dns-test-service.dns-2869.svc.cluster.local from pod dns-2869/dns-test-13002c8c-9f01-4823-ba86-ec814964b7a9: the server could not find the requested resource (get pods dns-test-13002c8c-9f01-4823-ba86-ec814964b7a9) Apr 6 21:59:17.576: INFO: Unable to read wheezy_tcp@dns-test-service.dns-2869.svc.cluster.local from pod dns-2869/dns-test-13002c8c-9f01-4823-ba86-ec814964b7a9: the server could not find the requested resource (get pods dns-test-13002c8c-9f01-4823-ba86-ec814964b7a9) Apr 6 21:59:17.579: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-2869.svc.cluster.local from pod dns-2869/dns-test-13002c8c-9f01-4823-ba86-ec814964b7a9: the server could not find the requested resource (get pods dns-test-13002c8c-9f01-4823-ba86-ec814964b7a9) Apr 6 21:59:17.583: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-2869.svc.cluster.local from pod dns-2869/dns-test-13002c8c-9f01-4823-ba86-ec814964b7a9: the server could not find the requested resource (get pods dns-test-13002c8c-9f01-4823-ba86-ec814964b7a9) Apr 6 21:59:17.607: INFO: Unable to read jessie_udp@dns-test-service.dns-2869.svc.cluster.local from pod dns-2869/dns-test-13002c8c-9f01-4823-ba86-ec814964b7a9: the server could not find the requested resource (get pods dns-test-13002c8c-9f01-4823-ba86-ec814964b7a9) Apr 6 21:59:17.610: INFO: Unable to read jessie_tcp@dns-test-service.dns-2869.svc.cluster.local from pod dns-2869/dns-test-13002c8c-9f01-4823-ba86-ec814964b7a9: the server could not find the requested resource (get pods dns-test-13002c8c-9f01-4823-ba86-ec814964b7a9) Apr 6 21:59:17.612: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-2869.svc.cluster.local from pod dns-2869/dns-test-13002c8c-9f01-4823-ba86-ec814964b7a9: the server could not find the requested resource (get pods dns-test-13002c8c-9f01-4823-ba86-ec814964b7a9) Apr 6 21:59:17.616: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-2869.svc.cluster.local from pod 
dns-2869/dns-test-13002c8c-9f01-4823-ba86-ec814964b7a9: the server could not find the requested resource (get pods dns-test-13002c8c-9f01-4823-ba86-ec814964b7a9) Apr 6 21:59:17.635: INFO: Lookups using dns-2869/dns-test-13002c8c-9f01-4823-ba86-ec814964b7a9 failed for: [wheezy_udp@dns-test-service.dns-2869.svc.cluster.local wheezy_tcp@dns-test-service.dns-2869.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-2869.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-2869.svc.cluster.local jessie_udp@dns-test-service.dns-2869.svc.cluster.local jessie_tcp@dns-test-service.dns-2869.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-2869.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-2869.svc.cluster.local] Apr 6 21:59:22.572: INFO: Unable to read wheezy_udp@dns-test-service.dns-2869.svc.cluster.local from pod dns-2869/dns-test-13002c8c-9f01-4823-ba86-ec814964b7a9: the server could not find the requested resource (get pods dns-test-13002c8c-9f01-4823-ba86-ec814964b7a9) Apr 6 21:59:22.575: INFO: Unable to read wheezy_tcp@dns-test-service.dns-2869.svc.cluster.local from pod dns-2869/dns-test-13002c8c-9f01-4823-ba86-ec814964b7a9: the server could not find the requested resource (get pods dns-test-13002c8c-9f01-4823-ba86-ec814964b7a9) Apr 6 21:59:22.579: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-2869.svc.cluster.local from pod dns-2869/dns-test-13002c8c-9f01-4823-ba86-ec814964b7a9: the server could not find the requested resource (get pods dns-test-13002c8c-9f01-4823-ba86-ec814964b7a9) Apr 6 21:59:22.582: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-2869.svc.cluster.local from pod dns-2869/dns-test-13002c8c-9f01-4823-ba86-ec814964b7a9: the server could not find the requested resource (get pods dns-test-13002c8c-9f01-4823-ba86-ec814964b7a9) Apr 6 21:59:22.605: INFO: Unable to read jessie_udp@dns-test-service.dns-2869.svc.cluster.local from pod dns-2869/dns-test-13002c8c-9f01-4823-ba86-ec814964b7a9: the server could not find the requested resource (get pods dns-test-13002c8c-9f01-4823-ba86-ec814964b7a9) Apr 6 21:59:22.608: INFO: Unable to read jessie_tcp@dns-test-service.dns-2869.svc.cluster.local from pod dns-2869/dns-test-13002c8c-9f01-4823-ba86-ec814964b7a9: the server could not find the requested resource (get pods dns-test-13002c8c-9f01-4823-ba86-ec814964b7a9) Apr 6 21:59:22.611: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-2869.svc.cluster.local from pod dns-2869/dns-test-13002c8c-9f01-4823-ba86-ec814964b7a9: the server could not find the requested resource (get pods dns-test-13002c8c-9f01-4823-ba86-ec814964b7a9) Apr 6 21:59:22.614: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-2869.svc.cluster.local from pod dns-2869/dns-test-13002c8c-9f01-4823-ba86-ec814964b7a9: the server could not find the requested resource (get pods dns-test-13002c8c-9f01-4823-ba86-ec814964b7a9) Apr 6 21:59:22.634: INFO: Lookups using dns-2869/dns-test-13002c8c-9f01-4823-ba86-ec814964b7a9 failed for: [wheezy_udp@dns-test-service.dns-2869.svc.cluster.local wheezy_tcp@dns-test-service.dns-2869.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-2869.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-2869.svc.cluster.local jessie_udp@dns-test-service.dns-2869.svc.cluster.local jessie_tcp@dns-test-service.dns-2869.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-2869.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-2869.svc.cluster.local] Apr 6 21:59:27.572: INFO: 
Unable to read wheezy_udp@dns-test-service.dns-2869.svc.cluster.local from pod dns-2869/dns-test-13002c8c-9f01-4823-ba86-ec814964b7a9: the server could not find the requested resource (get pods dns-test-13002c8c-9f01-4823-ba86-ec814964b7a9) Apr 6 21:59:27.576: INFO: Unable to read wheezy_tcp@dns-test-service.dns-2869.svc.cluster.local from pod dns-2869/dns-test-13002c8c-9f01-4823-ba86-ec814964b7a9: the server could not find the requested resource (get pods dns-test-13002c8c-9f01-4823-ba86-ec814964b7a9) Apr 6 21:59:27.580: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-2869.svc.cluster.local from pod dns-2869/dns-test-13002c8c-9f01-4823-ba86-ec814964b7a9: the server could not find the requested resource (get pods dns-test-13002c8c-9f01-4823-ba86-ec814964b7a9) Apr 6 21:59:27.583: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-2869.svc.cluster.local from pod dns-2869/dns-test-13002c8c-9f01-4823-ba86-ec814964b7a9: the server could not find the requested resource (get pods dns-test-13002c8c-9f01-4823-ba86-ec814964b7a9) Apr 6 21:59:27.604: INFO: Unable to read jessie_udp@dns-test-service.dns-2869.svc.cluster.local from pod dns-2869/dns-test-13002c8c-9f01-4823-ba86-ec814964b7a9: the server could not find the requested resource (get pods dns-test-13002c8c-9f01-4823-ba86-ec814964b7a9) Apr 6 21:59:27.607: INFO: Unable to read jessie_tcp@dns-test-service.dns-2869.svc.cluster.local from pod dns-2869/dns-test-13002c8c-9f01-4823-ba86-ec814964b7a9: the server could not find the requested resource (get pods dns-test-13002c8c-9f01-4823-ba86-ec814964b7a9) Apr 6 21:59:27.609: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-2869.svc.cluster.local from pod dns-2869/dns-test-13002c8c-9f01-4823-ba86-ec814964b7a9: the server could not find the requested resource (get pods dns-test-13002c8c-9f01-4823-ba86-ec814964b7a9) Apr 6 21:59:27.612: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-2869.svc.cluster.local from pod dns-2869/dns-test-13002c8c-9f01-4823-ba86-ec814964b7a9: the server could not find the requested resource (get pods dns-test-13002c8c-9f01-4823-ba86-ec814964b7a9) Apr 6 21:59:27.628: INFO: Lookups using dns-2869/dns-test-13002c8c-9f01-4823-ba86-ec814964b7a9 failed for: [wheezy_udp@dns-test-service.dns-2869.svc.cluster.local wheezy_tcp@dns-test-service.dns-2869.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-2869.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-2869.svc.cluster.local jessie_udp@dns-test-service.dns-2869.svc.cluster.local jessie_tcp@dns-test-service.dns-2869.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-2869.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-2869.svc.cluster.local] Apr 6 21:59:32.633: INFO: DNS probes using dns-2869/dns-test-13002c8c-9f01-4823-ba86-ec814964b7a9 succeeded STEP: deleting the pod STEP: deleting the test service STEP: deleting the test headless service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 6 21:59:32.743: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-2869" for this suite. 
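The A, SRV, and PTR lookups above resolve because the test created a headless Service (dns-test-service) alongside a second service with a named port. For orientation, a minimal sketch of the headless half is below; the service and namespace names come from the log, while the selector label and port values are assumptions:

apiVersion: v1
kind: Service
metadata:
  name: dns-test-service
  namespace: dns-2869
spec:
  clusterIP: None          # headless: DNS answers with the backing pod IPs
  selector:
    app: dns-test          # assumed label; the e2e fixture defines its own
  ports:
  - name: http             # a named TCP port is what backs the _http._tcp SRV lookups
    protocol: TCP
    port: 80

From a pod in the namespace, "dig +search dns-test-service.dns-2869.svc.cluster.local A" (as in the probe loops above) returns records once the service endpoints are ready, which is why the early probe rounds fail and the 21:59:32 round succeeds.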
• [SLOW TEST:36.805 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for services [Conformance]","total":278,"completed":212,"skipped":3484,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 6 21:59:32.913: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139 [It] should be able to change the type from NodePort to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating a service nodeport-service with the type=NodePort in namespace services-6238 STEP: Creating active service to test reachability when its FQDN is referred as externalName for another service STEP: creating service externalsvc in namespace services-6238 STEP: creating replication controller externalsvc in namespace services-6238 I0406 21:59:33.409803 6 runners.go:189] Created replication controller with name: externalsvc, namespace: services-6238, replica count: 2 I0406 21:59:36.460317 6 runners.go:189] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0406 21:59:39.460545 6 runners.go:189] externalsvc Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady STEP: changing the NodePort service to type=ExternalName Apr 6 21:59:39.518: INFO: Creating new exec pod Apr 6 21:59:43.571: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-6238 execpodvvqql -- /bin/sh -x -c nslookup nodeport-service' Apr 6 21:59:43.819: INFO: stderr: "I0406 21:59:43.727721 2949 log.go:172] (0xc0000f4e70) (0xc0009ee140) Create stream\nI0406 21:59:43.727796 2949 log.go:172] (0xc0000f4e70) (0xc0009ee140) Stream added, broadcasting: 1\nI0406 21:59:43.730947 2949 log.go:172] (0xc0000f4e70) Reply frame received for 1\nI0406 21:59:43.731001 2949 log.go:172] (0xc0000f4e70) (0xc000221540) Create stream\nI0406 21:59:43.731024 2949 log.go:172] (0xc0000f4e70) (0xc000221540) Stream added, broadcasting: 3\nI0406 21:59:43.732144 2949 log.go:172] (0xc0000f4e70) Reply frame received for 3\nI0406 21:59:43.732181 2949 log.go:172] (0xc0000f4e70) (0xc0009ee1e0) Create stream\nI0406 21:59:43.732199 2949 log.go:172] (0xc0000f4e70) (0xc0009ee1e0) Stream added, broadcasting: 5\nI0406 21:59:43.733044 2949 log.go:172] (0xc0000f4e70) Reply frame received for 5\nI0406 21:59:43.808179 2949 log.go:172] (0xc0000f4e70) Data frame received for 5\nI0406 21:59:43.808210 2949 log.go:172] (0xc0009ee1e0) (5) Data frame handling\nI0406 21:59:43.808225 2949 log.go:172] (0xc0009ee1e0) (5) Data frame sent\n+ 
nslookup nodeport-service\nI0406 21:59:43.812451 2949 log.go:172] (0xc0000f4e70) Data frame received for 3\nI0406 21:59:43.812470 2949 log.go:172] (0xc000221540) (3) Data frame handling\nI0406 21:59:43.812492 2949 log.go:172] (0xc000221540) (3) Data frame sent\nI0406 21:59:43.813372 2949 log.go:172] (0xc0000f4e70) Data frame received for 3\nI0406 21:59:43.813395 2949 log.go:172] (0xc000221540) (3) Data frame handling\nI0406 21:59:43.813404 2949 log.go:172] (0xc000221540) (3) Data frame sent\nI0406 21:59:43.813785 2949 log.go:172] (0xc0000f4e70) Data frame received for 5\nI0406 21:59:43.813807 2949 log.go:172] (0xc0009ee1e0) (5) Data frame handling\nI0406 21:59:43.814024 2949 log.go:172] (0xc0000f4e70) Data frame received for 3\nI0406 21:59:43.814048 2949 log.go:172] (0xc000221540) (3) Data frame handling\nI0406 21:59:43.815850 2949 log.go:172] (0xc0000f4e70) Data frame received for 1\nI0406 21:59:43.815869 2949 log.go:172] (0xc0009ee140) (1) Data frame handling\nI0406 21:59:43.815888 2949 log.go:172] (0xc0009ee140) (1) Data frame sent\nI0406 21:59:43.815988 2949 log.go:172] (0xc0000f4e70) (0xc0009ee140) Stream removed, broadcasting: 1\nI0406 21:59:43.816050 2949 log.go:172] (0xc0000f4e70) Go away received\nI0406 21:59:43.816320 2949 log.go:172] (0xc0000f4e70) (0xc0009ee140) Stream removed, broadcasting: 1\nI0406 21:59:43.816333 2949 log.go:172] (0xc0000f4e70) (0xc000221540) Stream removed, broadcasting: 3\nI0406 21:59:43.816340 2949 log.go:172] (0xc0000f4e70) (0xc0009ee1e0) Stream removed, broadcasting: 5\n" Apr 6 21:59:43.819: INFO: stdout: "Server:\t\t10.96.0.10\nAddress:\t10.96.0.10#53\n\nnodeport-service.services-6238.svc.cluster.local\tcanonical name = externalsvc.services-6238.svc.cluster.local.\nName:\texternalsvc.services-6238.svc.cluster.local\nAddress: 10.102.187.147\n\n" STEP: deleting ReplicationController externalsvc in namespace services-6238, will wait for the garbage collector to delete the pods Apr 6 21:59:43.879: INFO: Deleting ReplicationController externalsvc took: 6.815631ms Apr 6 21:59:44.179: INFO: Terminating ReplicationController externalsvc pods took: 300.267824ms Apr 6 21:59:59.314: INFO: Cleaning up the NodePort to ExternalName test service [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 6 21:59:59.346: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-6238" for this suite. 
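The nslookup output above, a CNAME from nodeport-service to externalsvc, is exactly what the type change produces. With names taken from the log, the post-change Service reduces to the sketch below:

apiVersion: v1
kind: Service
metadata:
  name: nodeport-service
  namespace: services-6238
spec:
  type: ExternalName      # changed from NodePort; no cluster IP or node ports remain
  # cluster DNS now serves a CNAME to the target, matching the
  # "canonical name = externalsvc.services-6238.svc.cluster.local." line above
  externalName: externalsvc.services-6238.svc.cluster.local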
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143 • [SLOW TEST:26.444 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from NodePort to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance]","total":278,"completed":213,"skipped":3505,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 6 21:59:59.357: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Apr 6 21:59:59.429: INFO: Waiting up to 5m0s for pod "downwardapi-volume-faa11ea6-e88e-4c37-9691-1b2c30ea7fb8" in namespace "downward-api-5630" to be "success or failure" Apr 6 21:59:59.433: INFO: Pod "downwardapi-volume-faa11ea6-e88e-4c37-9691-1b2c30ea7fb8": Phase="Pending", Reason="", readiness=false. Elapsed: 3.625878ms Apr 6 22:00:01.437: INFO: Pod "downwardapi-volume-faa11ea6-e88e-4c37-9691-1b2c30ea7fb8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007810766s Apr 6 22:00:03.441: INFO: Pod "downwardapi-volume-faa11ea6-e88e-4c37-9691-1b2c30ea7fb8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011466396s STEP: Saw pod success Apr 6 22:00:03.441: INFO: Pod "downwardapi-volume-faa11ea6-e88e-4c37-9691-1b2c30ea7fb8" satisfied condition "success or failure" Apr 6 22:00:03.444: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-faa11ea6-e88e-4c37-9691-1b2c30ea7fb8 container client-container: STEP: delete the pod Apr 6 22:00:03.465: INFO: Waiting for pod downwardapi-volume-faa11ea6-e88e-4c37-9691-1b2c30ea7fb8 to disappear Apr 6 22:00:03.469: INFO: Pod downwardapi-volume-faa11ea6-e88e-4c37-9691-1b2c30ea7fb8 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 6 22:00:03.469: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-5630" for this suite. 
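The pod in this test reads its own memory limit back out of a downwardAPI volume. A minimal sketch of that wiring, assuming a busybox image, a 64Mi limit, and a /etc/podinfo mount path (all three illustrative; only the resourceFieldRef mechanism matches the test):

apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-example    # assumed name
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox                    # assumed image
    command: ["sh", "-c", "cat /etc/podinfo/memory_limit"]
    resources:
      limits:
        memory: "64Mi"                # assumed value; projected into the file below
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: memory_limit
        resourceFieldRef:
          containerName: client-container
          resource: limits.memory     # the "container's memory limit" under test

The container prints the value and exits, which is why the framework above waits for "success or failure" and then pulls the container logs to verify the output.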
•{"msg":"PASSED [sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance]","total":278,"completed":214,"skipped":3519,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 6 22:00:03.478: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name s-test-opt-del-15188548-9877-40b4-b2f8-a9a279c32d1b STEP: Creating secret with name s-test-opt-upd-c5b9ec5b-c064-4c50-8582-20332787453b STEP: Creating the pod STEP: Deleting secret s-test-opt-del-15188548-9877-40b4-b2f8-a9a279c32d1b STEP: Updating secret s-test-opt-upd-c5b9ec5b-c064-4c50-8582-20332787453b STEP: Creating secret with name s-test-opt-create-beb6aaa6-2665-4c05-8057-26056c28909a STEP: waiting to observe update in volume [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 6 22:00:13.720: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-5372" for this suite. • [SLOW TEST:10.250 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":215,"skipped":3538,"failed":0} SSSSSSSSS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 6 22:00:13.729: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test override command Apr 6 22:00:13.817: INFO: Waiting up to 5m0s for pod "client-containers-8a236466-23a6-48b3-8e67-1f3cc4ab0d82" in namespace "containers-5888" to be "success or failure" Apr 6 22:00:13.842: INFO: Pod "client-containers-8a236466-23a6-48b3-8e67-1f3cc4ab0d82": Phase="Pending", Reason="", readiness=false. 
Elapsed: 24.663132ms Apr 6 22:00:15.847: INFO: Pod "client-containers-8a236466-23a6-48b3-8e67-1f3cc4ab0d82": Phase="Pending", Reason="", readiness=false. Elapsed: 2.029183352s Apr 6 22:00:17.851: INFO: Pod "client-containers-8a236466-23a6-48b3-8e67-1f3cc4ab0d82": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.033596151s STEP: Saw pod success Apr 6 22:00:17.851: INFO: Pod "client-containers-8a236466-23a6-48b3-8e67-1f3cc4ab0d82" satisfied condition "success or failure" Apr 6 22:00:17.854: INFO: Trying to get logs from node jerma-worker pod client-containers-8a236466-23a6-48b3-8e67-1f3cc4ab0d82 container test-container: STEP: delete the pod Apr 6 22:00:17.887: INFO: Waiting for pod client-containers-8a236466-23a6-48b3-8e67-1f3cc4ab0d82 to disappear Apr 6 22:00:17.895: INFO: Pod client-containers-8a236466-23a6-48b3-8e67-1f3cc4ab0d82 no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 6 22:00:17.895: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-5888" for this suite. •{"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]","total":278,"completed":216,"skipped":3547,"failed":0} SSSSSSS ------------------------------ [k8s.io] [sig-node] PreStop should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 6 22:00:17.902: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename prestop STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pre_stop.go:172 [It] should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating server pod server in namespace prestop-3359 STEP: Waiting for pods to come up. STEP: Creating tester pod tester in namespace prestop-3359 STEP: Deleting pre-stop pod Apr 6 22:00:31.007: INFO: Saw: { "Hostname": "server", "Sent": null, "Received": { "prestop": 1 }, "Errors": null, "Log": [ "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up." ], "StillContactingPeers": true } STEP: Deleting the server pod [AfterEach] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 6 22:00:31.013: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "prestop-3359" for this suite. 
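The "prestop": 1 counter in the server's JSON above is written when the tester pod's preStop hook phones home during deletion. A hedged sketch of such a hook follows; the image, port, path, and server address are all assumptions, and only the lifecycle.preStop mechanism matches what the test exercises:

apiVersion: v1
kind: Pod
metadata:
  name: tester
  namespace: prestop-3359
spec:
  containers:
  - name: tester
    image: busybox                    # assumed image
    command: ["sleep", "600"]
    env:
    - name: SERVER_IP
      value: "10.244.1.1"             # assumed; the e2e test injects the real server pod IP
    lifecycle:
      preStop:
        exec:
          # runs after deletion is requested and before SIGTERM reaches the
          # container, so the server records the contact before the pod dies
          command: ["sh", "-c", "wget -qO- http://$SERVER_IP:8080/prestop"]  # assumed URL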
• [SLOW TEST:13.139 seconds] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] [sig-node] PreStop should call prestop when killing a pod [Conformance]","total":278,"completed":217,"skipped":3554,"failed":0} SSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl run rc should create an rc from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 6 22:00:31.041: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [BeforeEach] Kubectl run rc /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1525 [It] should create an rc from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: running the image docker.io/library/httpd:2.4.38-alpine Apr 6 22:00:31.201: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-rc --image=docker.io/library/httpd:2.4.38-alpine --generator=run/v1 --namespace=kubectl-2335' Apr 6 22:00:31.352: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" Apr 6 22:00:31.352: INFO: stdout: "replicationcontroller/e2e-test-httpd-rc created\n" STEP: verifying the rc e2e-test-httpd-rc was created STEP: verifying the pod controlled by rc e2e-test-httpd-rc was created STEP: confirm that you can get logs from an rc Apr 6 22:00:31.429: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [e2e-test-httpd-rc-6fg8f] Apr 6 22:00:31.429: INFO: Waiting up to 5m0s for pod "e2e-test-httpd-rc-6fg8f" in namespace "kubectl-2335" to be "running and ready" Apr 6 22:00:31.466: INFO: Pod "e2e-test-httpd-rc-6fg8f": Phase="Pending", Reason="", readiness=false. Elapsed: 37.485953ms Apr 6 22:00:33.470: INFO: Pod "e2e-test-httpd-rc-6fg8f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.041240317s Apr 6 22:00:35.473: INFO: Pod "e2e-test-httpd-rc-6fg8f": Phase="Running", Reason="", readiness=true. Elapsed: 4.044652848s Apr 6 22:00:35.474: INFO: Pod "e2e-test-httpd-rc-6fg8f" satisfied condition "running and ready" Apr 6 22:00:35.474: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [e2e-test-httpd-rc-6fg8f] Apr 6 22:00:35.474: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs rc/e2e-test-httpd-rc --namespace=kubectl-2335' Apr 6 22:00:35.597: INFO: stderr: "" Apr 6 22:00:35.597: INFO: stdout: "AH00558: httpd: Could not reliably determine the server's fully qualified domain name, using 10.244.2.99. Set the 'ServerName' directive globally to suppress this message\nAH00558: httpd: Could not reliably determine the server's fully qualified domain name, using 10.244.2.99. 
Set the 'ServerName' directive globally to suppress this message\n[Mon Apr 06 22:00:33.967171 2020] [mpm_event:notice] [pid 1:tid 140414430137192] AH00489: Apache/2.4.38 (Unix) configured -- resuming normal operations\n[Mon Apr 06 22:00:33.967222 2020] [core:notice] [pid 1:tid 140414430137192] AH00094: Command line: 'httpd -D FOREGROUND'\n" [AfterEach] Kubectl run rc /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1530 Apr 6 22:00:35.597: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-httpd-rc --namespace=kubectl-2335' Apr 6 22:00:35.726: INFO: stderr: "" Apr 6 22:00:35.726: INFO: stdout: "replicationcontroller \"e2e-test-httpd-rc\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 6 22:00:35.726: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-2335" for this suite. •{"msg":"PASSED [sig-cli] Kubectl client Kubectl run rc should create an rc from an image [Conformance]","total":278,"completed":218,"skipped":3563,"failed":0} SSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 6 22:00:35.734: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Apr 6 22:00:36.934: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Apr 6 22:00:38.944: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721807236, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721807236, loc:(*time.Location)(0x78ee080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721807237, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721807236, loc:(*time.Location)(0x78ee080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Apr 6 22:00:40.964: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", 
LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721807236, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721807236, loc:(*time.Location)(0x78ee080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721807237, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721807236, loc:(*time.Location)(0x78ee080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Apr 6 22:00:43.975: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should unconditionally reject operations on fail closed webhook [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Registering a webhook that server cannot talk to, with fail closed policy, via the AdmissionRegistration API STEP: create a namespace for the webhook STEP: create a configmap should be unconditionally rejected by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 6 22:00:44.032: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-7365" for this suite. STEP: Destroying namespace "webhook-7365-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:8.392 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should unconditionally reject operations on fail closed webhook [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","total":278,"completed":219,"skipped":3570,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 6 22:00:44.126: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Apr 6 22:00:45.142: INFO: deployment "sample-webhook-deployment" doesn't have the required 
revision set Apr 6 22:00:47.154: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721807245, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721807245, loc:(*time.Location)(0x78ee080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721807245, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721807245, loc:(*time.Location)(0x78ee080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Apr 6 22:00:50.192: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should honor timeout [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Setting timeout (1s) shorter than webhook latency (5s) STEP: Registering slow webhook via the AdmissionRegistration API STEP: Request fails when timeout (1s) is shorter than slow webhook latency (5s) STEP: Having no error when timeout is shorter than webhook latency and failure policy is ignore STEP: Registering slow webhook via the AdmissionRegistration API STEP: Having no error when timeout is longer than webhook latency STEP: Registering slow webhook via the AdmissionRegistration API STEP: Having no error when timeout is empty (defaulted to 10s in v1) STEP: Registering slow webhook via the AdmissionRegistration API [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 6 22:01:02.600: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-3079" for this suite. STEP: Destroying namespace "webhook-3079-markers" for this suite. 
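Each "Registering slow webhook" step above re-registers the same webhook with different timeoutSeconds and failurePolicy values. The "timeout (1s) shorter than latency (5s), failure policy ignore" case looks roughly like this; the configuration name, webhook name, rules, and path are assumptions, while the service reference mirrors the e2e-test-webhook service from the log:

apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingWebhookConfiguration
metadata:
  name: slow-webhook-example          # assumed name
webhooks:
- name: slow.webhook.example.com      # assumed name
  timeoutSeconds: 1                   # deliberately shorter than the webhook's 5s delay
  failurePolicy: Ignore               # a timed-out call is treated as an allow
  clientConfig:
    service:
      namespace: webhook-3079
      name: e2e-test-webhook
      path: /always-allow-delay-5s    # assumed path
    # caBundle elided; the e2e framework injects its generated CA here
  rules:
  - apiGroups: [""]
    apiVersions: ["v1"]
    operations: ["CREATE"]
    resources: ["configmaps"]         # assumed resource
    scope: "*"
  admissionReviewVersions: ["v1"]
  sideEffects: None

With failurePolicy: Fail instead, the same 1s timeout surfaces as a request error, which is the first case checked above.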
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:18.550 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should honor timeout [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","total":278,"completed":220,"skipped":3585,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should support remote command execution over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 6 22:01:02.677: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177 [It] should support remote command execution over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Apr 6 22:01:02.810: INFO: >>> kubeConfig: /root/.kube/config STEP: creating the pod STEP: submitting the pod to kubernetes [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 6 22:01:07.156: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-5620" for this suite. 
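The pods test above checks exec over a websocket rather than SPDY: the client dials the pod's exec subresource and negotiates the base64.channel.k8s.io subprotocol, in which the first byte of every frame selects a channel (0 stdin, 1 stdout, 2 stderr) and the remainder is base64-encoded payload. A sketch of how that URL is assembled with client-go follows; the pod name and namespace are placeholders, and the actual wss:// dial is left as a comment since the TLS details depend on the kubeconfig in use.

```go
package main

import (
	"fmt"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Load the same kubeconfig the suite uses (the path is an assumption).
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	// Build the URL of the pod's exec subresource; pod and namespace are placeholders.
	req := clientset.CoreV1().RESTClient().Post().
		Resource("pods").
		Namespace("pods-5620").
		Name("pod-exec-websocket-demo").
		SubResource("exec").
		Param("command", "echo").
		Param("command", "remote execution over websockets").
		Param("stdout", "true").
		Param("stderr", "true")

	// A websocket client would rewrite https:// to wss:// and dial this URL with the
	// base64.channel.k8s.io subprotocol; channel 1 frames carry stdout, channel 2 stderr.
	fmt.Println(req.URL())
}
```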
•{"msg":"PASSED [k8s.io] Pods should support remote command execution over websockets [NodeConformance] [Conformance]","total":278,"completed":221,"skipped":3606,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 6 22:01:07.182: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:69 [It] RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Apr 6 22:01:07.261: INFO: Creating deployment "test-recreate-deployment" Apr 6 22:01:07.275: INFO: Waiting deployment "test-recreate-deployment" to be updated to revision 1 Apr 6 22:01:07.329: INFO: deployment "test-recreate-deployment" doesn't have the required revision set Apr 6 22:01:09.384: INFO: Waiting deployment "test-recreate-deployment" to complete Apr 6 22:01:09.387: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721807267, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721807267, loc:(*time.Location)(0x78ee080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721807267, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721807267, loc:(*time.Location)(0x78ee080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-799c574856\" is progressing."}}, CollisionCount:(*int32)(nil)} Apr 6 22:01:11.391: INFO: Triggering a new rollout for deployment "test-recreate-deployment" Apr 6 22:01:11.398: INFO: Updating deployment test-recreate-deployment Apr 6 22:01:11.399: INFO: Watching deployment "test-recreate-deployment" to verify that new pods will not run with olds pods [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:63 Apr 6 22:01:11.819: INFO: Deployment "test-recreate-deployment": &Deployment{ObjectMeta:{test-recreate-deployment deployment-4403 /apis/apps/v1/namespaces/deployment-4403/deployments/test-recreate-deployment 5e905bec-10fe-4717-bc13-f050a1b8c7c0 5989186 2 2020-04-06 22:01:07 +0000 UTC map[name:sample-pod-3] map[deployment.kubernetes.io/revision:2] [] [] []},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil 
nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0048aa788 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},Strategy:DeploymentStrategy{Type:Recreate,RollingUpdate:nil,},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:0,UnavailableReplicas:1,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:False,Reason:MinimumReplicasUnavailable,Message:Deployment does not have minimum availability.,LastUpdateTime:2020-04-06 22:01:11 +0000 UTC,LastTransitionTime:2020-04-06 22:01:11 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:ReplicaSetUpdated,Message:ReplicaSet "test-recreate-deployment-5f94c574ff" is progressing.,LastUpdateTime:2020-04-06 22:01:11 +0000 UTC,LastTransitionTime:2020-04-06 22:01:07 +0000 UTC,},},ReadyReplicas:0,CollisionCount:nil,},} Apr 6 22:01:11.823: INFO: New ReplicaSet "test-recreate-deployment-5f94c574ff" of Deployment "test-recreate-deployment": &ReplicaSet{ObjectMeta:{test-recreate-deployment-5f94c574ff deployment-4403 /apis/apps/v1/namespaces/deployment-4403/replicasets/test-recreate-deployment-5f94c574ff 0a7830b3-f037-4795-a349-25c887087d7f 5989185 1 2020-04-06 22:01:11 +0000 UTC map[name:sample-pod-3 pod-template-hash:5f94c574ff] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:1 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment test-recreate-deployment 5e905bec-10fe-4717-bc13-f050a1b8c7c0 0xc0048aab27 0xc0048aab28}] [] []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 5f94c574ff,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3 pod-template-hash:5f94c574ff] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0048aab88 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Apr 6 22:01:11.823: INFO: All old ReplicaSets of Deployment "test-recreate-deployment": Apr 6 22:01:11.823: INFO: &ReplicaSet{ObjectMeta:{test-recreate-deployment-799c574856 deployment-4403 /apis/apps/v1/namespaces/deployment-4403/replicasets/test-recreate-deployment-799c574856 04227828-428a-49ce-a031-f188250a35ae 5989175 2 2020-04-06 22:01:07 +0000 UTC map[name:sample-pod-3 pod-template-hash:799c574856] 
map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:1 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-recreate-deployment 5e905bec-10fe-4717-bc13-f050a1b8c7c0 0xc0048aabf7 0xc0048aabf8}] [] []},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 799c574856,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3 pod-template-hash:799c574856] map[] [] [] []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0048aac68 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Apr 6 22:01:11.853: INFO: Pod "test-recreate-deployment-5f94c574ff-zrnpk" is not available: &Pod{ObjectMeta:{test-recreate-deployment-5f94c574ff-zrnpk test-recreate-deployment-5f94c574ff- deployment-4403 /api/v1/namespaces/deployment-4403/pods/test-recreate-deployment-5f94c574ff-zrnpk 7fb9e999-0596-4737-9b69-a41b3608cd90 5989187 0 2020-04-06 22:01:11 +0000 UTC map[name:sample-pod-3 pod-template-hash:5f94c574ff] map[] [{apps/v1 ReplicaSet test-recreate-deployment-5f94c574ff 0a7830b3-f037-4795-a349-25c887087d7f 0xc00454a447 0xc00454a448}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-55czc,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-55czc,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-55czc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-06 22:01:11 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-06 22:01:11 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-06 22:01:11 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-06 22:01:11 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.10,PodIP:,StartTime:2020-04-06 22:01:11 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 6 22:01:11.853: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-4403" for this suite. •{"msg":"PASSED [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance]","total":278,"completed":222,"skipped":3642,"failed":0} SSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl run pod should create a pod from an image when restart is Never [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 6 22:01:11.860: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [BeforeEach] Kubectl run pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1754 [It] should create a pod from an image when restart is Never [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: running the image docker.io/library/httpd:2.4.38-alpine Apr 6 22:01:11.902: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-pod --restart=Never --generator=run-pod/v1 --image=docker.io/library/httpd:2.4.38-alpine --namespace=kubectl-2313' Apr 6 22:01:12.016: INFO: stderr: "" Apr 6 22:01:12.016: INFO: stdout: "pod/e2e-test-httpd-pod created\n" STEP: verifying the pod e2e-test-httpd-pod was created [AfterEach] Kubectl run pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1759 Apr 6 22:01:12.122: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-httpd-pod --namespace=kubectl-2313' Apr 6 22:01:16.363: INFO: stderr: "" Apr 6 22:01:16.363: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 6 22:01:16.363: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-2313" for this suite. 
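Behind the RecreateDeployment test above sits the non-default rollout strategy: with strategy type Recreate the controller scales the old ReplicaSet to zero and waits for its pods to terminate before creating any pod of the new ReplicaSet, which is exactly the "new pods never run alongside old pods" invariant the watch asserts (note the old ReplicaSet dump shows Replicas:*0 while the new pod is still Pending). A sketch of the relevant spec, reusing the names and image visible in the log; everything else is placeholder boilerplate.

```go
package main

import (
	"fmt"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"sigs.k8s.io/yaml"
)

func main() {
	replicas := int32(1)
	labels := map[string]string{"name": "sample-pod-3"}

	d := appsv1.Deployment{
		ObjectMeta: metav1.ObjectMeta{Name: "test-recreate-deployment"},
		Spec: appsv1.DeploymentSpec{
			Replicas: &replicas,
			Selector: &metav1.LabelSelector{MatchLabels: labels},
			// Recreate: terminate every old pod before starting any new one,
			// instead of the default RollingUpdate overlap.
			Strategy: appsv1.DeploymentStrategy{
				Type: appsv1.RecreateDeploymentStrategyType,
			},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{
					Containers: []corev1.Container{{
						Name:  "httpd",
						Image: "docker.io/library/httpd:2.4.38-alpine",
					}},
				},
			},
		},
	}

	out, err := yaml.Marshal(d)
	if err != nil {
		panic(err)
	}
	fmt.Println(string(out))
}
```

The rollout the log triggers is just an image swap on Template.Spec.Containers[0] (httpd to agnhost); under Recreate the controller first drops test-recreate-deployment-799c574856 to zero replicas, then lets 5f94c574ff create its pod.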
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl run pod should create a pod from an image when restart is Never [Conformance]","total":278,"completed":223,"skipped":3650,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 6 22:01:16.372: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Apr 6 22:01:16.420: INFO: Waiting up to 5m0s for pod "downwardapi-volume-d41f4787-ca12-443d-bfec-29b769282cc0" in namespace "projected-3674" to be "success or failure" Apr 6 22:01:16.424: INFO: Pod "downwardapi-volume-d41f4787-ca12-443d-bfec-29b769282cc0": Phase="Pending", Reason="", readiness=false. Elapsed: 3.878316ms Apr 6 22:01:18.428: INFO: Pod "downwardapi-volume-d41f4787-ca12-443d-bfec-29b769282cc0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007933172s Apr 6 22:01:20.432: INFO: Pod "downwardapi-volume-d41f4787-ca12-443d-bfec-29b769282cc0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012293641s STEP: Saw pod success Apr 6 22:01:20.432: INFO: Pod "downwardapi-volume-d41f4787-ca12-443d-bfec-29b769282cc0" satisfied condition "success or failure" Apr 6 22:01:20.435: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-d41f4787-ca12-443d-bfec-29b769282cc0 container client-container: STEP: delete the pod Apr 6 22:01:20.485: INFO: Waiting for pod downwardapi-volume-d41f4787-ca12-443d-bfec-29b769282cc0 to disappear Apr 6 22:01:20.490: INFO: Pod downwardapi-volume-d41f4787-ca12-443d-bfec-29b769282cc0 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 6 22:01:20.490: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3674" for this suite. 
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":278,"completed":224,"skipped":3679,"failed":0} SSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 6 22:01:20.497: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:64 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:79 STEP: Creating service test in namespace statefulset-7202 [It] should perform canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a new StatefulSet Apr 6 22:01:20.588: INFO: Found 0 stateful pods, waiting for 3 Apr 6 22:01:30.592: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Apr 6 22:01:30.592: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Apr 6 22:01:30.592: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false Apr 6 22:01:40.592: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Apr 6 22:01:40.592: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Apr 6 22:01:40.592: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Updating stateful set template: update image from docker.io/library/httpd:2.4.38-alpine to docker.io/library/httpd:2.4.39-alpine Apr 6 22:01:40.616: INFO: Updating stateful set ss2 STEP: Creating a new revision STEP: Not applying an update when the partition is greater than the number of replicas STEP: Performing a canary update Apr 6 22:01:50.677: INFO: Updating stateful set ss2 Apr 6 22:01:50.703: INFO: Waiting for Pod statefulset-7202/ss2-2 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 STEP: Restoring Pods to the correct revision when they are deleted Apr 6 22:02:01.689: INFO: Found 2 stateful pods, waiting for 3 Apr 6 22:02:11.694: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Apr 6 22:02:11.694: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Apr 6 22:02:11.694: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Performing a phased rolling update Apr 6 22:02:11.718: INFO: Updating stateful set ss2 Apr 6 22:02:11.737: INFO: Waiting for Pod statefulset-7202/ss2-1 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Apr 6 22:02:21.761: INFO: Updating stateful set ss2 Apr 6 22:02:21.774: INFO: Waiting for StatefulSet statefulset-7202/ss2 to complete 
update Apr 6 22:02:21.774: INFO: Waiting for Pod statefulset-7202/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90 Apr 6 22:02:31.781: INFO: Deleting all statefulset in ns statefulset-7202 Apr 6 22:02:31.783: INFO: Scaling statefulset ss2 to 0 Apr 6 22:02:51.801: INFO: Waiting for statefulset status.replicas updated to 0 Apr 6 22:02:51.804: INFO: Deleting statefulset ss2 [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 6 22:02:51.829: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-7202" for this suite. • [SLOW TEST:91.339 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should perform canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance]","total":278,"completed":225,"skipped":3689,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 6 22:02:51.837: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Apr 6 22:02:51.907: INFO: Waiting up to 5m0s for pod "downwardapi-volume-b9f6a5be-53f4-468e-ae3f-efdb3cb60283" in namespace "downward-api-4716" to be "success or failure" Apr 6 22:02:51.918: INFO: Pod "downwardapi-volume-b9f6a5be-53f4-468e-ae3f-efdb3cb60283": Phase="Pending", Reason="", readiness=false. Elapsed: 10.689567ms Apr 6 22:02:53.922: INFO: Pod "downwardapi-volume-b9f6a5be-53f4-468e-ae3f-efdb3cb60283": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014836723s Apr 6 22:02:55.927: INFO: Pod "downwardapi-volume-b9f6a5be-53f4-468e-ae3f-efdb3cb60283": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.019291656s STEP: Saw pod success Apr 6 22:02:55.927: INFO: Pod "downwardapi-volume-b9f6a5be-53f4-468e-ae3f-efdb3cb60283" satisfied condition "success or failure" Apr 6 22:02:55.930: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-b9f6a5be-53f4-468e-ae3f-efdb3cb60283 container client-container: STEP: delete the pod Apr 6 22:02:55.962: INFO: Waiting for pod downwardapi-volume-b9f6a5be-53f4-468e-ae3f-efdb3cb60283 to disappear Apr 6 22:02:56.032: INFO: Pod downwardapi-volume-b9f6a5be-53f4-468e-ae3f-efdb3cb60283 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 6 22:02:56.032: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-4716" for this suite. •{"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance]","total":278,"completed":226,"skipped":3723,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl describe should check if kubectl describe prints relevant information for rc and pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 6 22:02:56.040: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [It] should check if kubectl describe prints relevant information for rc and pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Apr 6 22:02:56.081: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-4482' Apr 6 22:02:56.420: INFO: stderr: "" Apr 6 22:02:56.420: INFO: stdout: "replicationcontroller/agnhost-master created\n" Apr 6 22:02:56.420: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-4482' Apr 6 22:02:56.699: INFO: stderr: "" Apr 6 22:02:56.699: INFO: stdout: "service/agnhost-master created\n" STEP: Waiting for Agnhost master to start. Apr 6 22:02:57.733: INFO: Selector matched 1 pods for map[app:agnhost] Apr 6 22:02:57.733: INFO: Found 0 / 1 Apr 6 22:02:58.704: INFO: Selector matched 1 pods for map[app:agnhost] Apr 6 22:02:58.704: INFO: Found 0 / 1 Apr 6 22:02:59.703: INFO: Selector matched 1 pods for map[app:agnhost] Apr 6 22:02:59.703: INFO: Found 1 / 1 Apr 6 22:02:59.703: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 Apr 6 22:02:59.709: INFO: Selector matched 1 pods for map[app:agnhost] Apr 6 22:02:59.709: INFO: ForEach: Found 1 pods from the filter. Now looping through them. 
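Before the kubectl describe output continues, it is worth pinning down the two spec fields that drive the StatefulSet tests in this section: the canary/phased test above sets updateStrategy.rollingUpdate.partition so that only ordinals greater than or equal to the partition move to the new revision (lowering the partition step by step is the "phased" rollout), while the burst-scaling test further below relies on podManagementPolicy: Parallel so that scaling does not wait for each ordinal to become Running and Ready. The sketch shows both fields on a single apps/v1 object for brevity; the real tests configure them on separate StatefulSets, and the selector labels here are placeholders.

```go
package main

import (
	"fmt"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"sigs.k8s.io/yaml"
)

func main() {
	replicas := int32(3)
	partition := int32(2)                          // only ordinal 2 (ss2-2) takes the new revision: the canary
	labels := map[string]string{"app": "ss2-demo"} // placeholder selector labels

	ss := appsv1.StatefulSet{
		ObjectMeta: metav1.ObjectMeta{Name: "ss2"},
		Spec: appsv1.StatefulSetSpec{
			Replicas:    &replicas,
			ServiceName: "test", // the headless service the BeforeEach creates
			Selector:    &metav1.LabelSelector{MatchLabels: labels},
			// Parallel: create and delete pods in bursts rather than one
			// ordinal at a time (what the burst-scaling test exercises).
			PodManagementPolicy: appsv1.ParallelPodManagement,
			UpdateStrategy: appsv1.StatefulSetUpdateStrategy{
				Type: appsv1.RollingUpdateStatefulSetStrategyType,
				RollingUpdate: &appsv1.RollingUpdateStatefulSetStrategy{
					Partition: &partition, // lower toward 0 to phase the rollout
				},
			},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{
					Containers: []corev1.Container{{
						Name:  "webserver",
						Image: "docker.io/library/httpd:2.4.39-alpine", // the canary image
					}},
				},
			},
		},
	}

	out, err := yaml.Marshal(ss)
	if err != nil {
		panic(err)
	}
	fmt.Println(string(out))
}
```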
Apr 6 22:02:59.709: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe pod agnhost-master-kxg6v --namespace=kubectl-4482' Apr 6 22:02:59.812: INFO: stderr: "" Apr 6 22:02:59.812: INFO: stdout: "Name: agnhost-master-kxg6v\nNamespace: kubectl-4482\nPriority: 0\nNode: jerma-worker/172.17.0.10\nStart Time: Mon, 06 Apr 2020 22:02:56 +0000\nLabels: app=agnhost\n role=master\nAnnotations: \nStatus: Running\nIP: 10.244.1.35\nIPs:\n IP: 10.244.1.35\nControlled By: ReplicationController/agnhost-master\nContainers:\n agnhost-master:\n Container ID: containerd://9a6166d48a22ff8a89a02fc5b5c2e1586ab96b0e7cabcba9fa465ac5c24c66c3\n Image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8\n Image ID: gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5\n Port: 6379/TCP\n Host Port: 0/TCP\n State: Running\n Started: Mon, 06 Apr 2020 22:02:58 +0000\n Ready: True\n Restart Count: 0\n Environment: \n Mounts:\n /var/run/secrets/kubernetes.io/serviceaccount from default-token-hkrqv (ro)\nConditions:\n Type Status\n Initialized True \n Ready True \n ContainersReady True \n PodScheduled True \nVolumes:\n default-token-hkrqv:\n Type: Secret (a volume populated by a Secret)\n SecretName: default-token-hkrqv\n Optional: false\nQoS Class: BestEffort\nNode-Selectors: \nTolerations: node.kubernetes.io/not-ready:NoExecute for 300s\n node.kubernetes.io/unreachable:NoExecute for 300s\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal Scheduled 3s default-scheduler Successfully assigned kubectl-4482/agnhost-master-kxg6v to jerma-worker\n Normal Pulled 2s kubelet, jerma-worker Container image \"gcr.io/kubernetes-e2e-test-images/agnhost:2.8\" already present on machine\n Normal Created 1s kubelet, jerma-worker Created container agnhost-master\n Normal Started 1s kubelet, jerma-worker Started container agnhost-master\n" Apr 6 22:02:59.812: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe rc agnhost-master --namespace=kubectl-4482' Apr 6 22:02:59.940: INFO: stderr: "" Apr 6 22:02:59.940: INFO: stdout: "Name: agnhost-master\nNamespace: kubectl-4482\nSelector: app=agnhost,role=master\nLabels: app=agnhost\n role=master\nAnnotations: \nReplicas: 1 current / 1 desired\nPods Status: 1 Running / 0 Waiting / 0 Succeeded / 0 Failed\nPod Template:\n Labels: app=agnhost\n role=master\n Containers:\n agnhost-master:\n Image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8\n Port: 6379/TCP\n Host Port: 0/TCP\n Environment: \n Mounts: \n Volumes: \nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal SuccessfulCreate 3s replication-controller Created pod: agnhost-master-kxg6v\n" Apr 6 22:02:59.941: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe service agnhost-master --namespace=kubectl-4482' Apr 6 22:03:00.049: INFO: stderr: "" Apr 6 22:03:00.049: INFO: stdout: "Name: agnhost-master\nNamespace: kubectl-4482\nLabels: app=agnhost\n role=master\nAnnotations: \nSelector: app=agnhost,role=master\nType: ClusterIP\nIP: 10.101.214.41\nPort: 6379/TCP\nTargetPort: agnhost-server/TCP\nEndpoints: 10.244.1.35:6379\nSession Affinity: None\nEvents: \n" Apr 6 22:03:00.053: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe node jerma-control-plane' Apr 6 22:03:00.181: INFO: stderr: "" Apr 6 22:03:00.181: INFO: stdout: "Name: jerma-control-plane\nRoles: master\nLabels: beta.kubernetes.io/arch=amd64\n beta.kubernetes.io/os=linux\n 
kubernetes.io/arch=amd64\n kubernetes.io/hostname=jerma-control-plane\n kubernetes.io/os=linux\n node-role.kubernetes.io/master=\nAnnotations: kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock\n node.alpha.kubernetes.io/ttl: 0\n volumes.kubernetes.io/controller-managed-attach-detach: true\nCreationTimestamp: Sun, 15 Mar 2020 18:25:55 +0000\nTaints: node-role.kubernetes.io/master:NoSchedule\nUnschedulable: false\nLease:\n HolderIdentity: jerma-control-plane\n AcquireTime: \n RenewTime: Mon, 06 Apr 2020 22:02:55 +0000\nConditions:\n Type Status LastHeartbeatTime LastTransitionTime Reason Message\n ---- ------ ----------------- ------------------ ------ -------\n MemoryPressure False Mon, 06 Apr 2020 22:00:36 +0000 Sun, 15 Mar 2020 18:25:55 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available\n DiskPressure False Mon, 06 Apr 2020 22:00:36 +0000 Sun, 15 Mar 2020 18:25:55 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure\n PIDPressure False Mon, 06 Apr 2020 22:00:36 +0000 Sun, 15 Mar 2020 18:25:55 +0000 KubeletHasSufficientPID kubelet has sufficient PID available\n Ready True Mon, 06 Apr 2020 22:00:36 +0000 Sun, 15 Mar 2020 18:26:27 +0000 KubeletReady kubelet is posting ready status\nAddresses:\n InternalIP: 172.17.0.9\n Hostname: jerma-control-plane\nCapacity:\n cpu: 16\n ephemeral-storage: 2303189964Ki\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 131759892Ki\n pods: 110\nAllocatable:\n cpu: 16\n ephemeral-storage: 2303189964Ki\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 131759892Ki\n pods: 110\nSystem Info:\n Machine ID: 3bcfb16fe77247d3af07bed975350d5c\n System UUID: 947a2db5-5527-4203-8af5-13d97ffe8a80\n Boot ID: ca2aa731-f890-4956-92a1-ff8c7560d571\n Kernel Version: 4.15.0-88-generic\n OS Image: Ubuntu 19.10\n Operating System: linux\n Architecture: amd64\n Container Runtime Version: containerd://1.3.2-31-gaa877d78\n Kubelet Version: v1.17.2\n Kube-Proxy Version: v1.17.2\nPodCIDR: 10.244.0.0/24\nPodCIDRs: 10.244.0.0/24\nNon-terminated Pods: (9 in total)\n Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits AGE\n --------- ---- ------------ ---------- --------------- ------------- ---\n kube-system coredns-6955765f44-rll5s 100m (0%) 0 (0%) 70Mi (0%) 170Mi (0%) 22d\n kube-system coredns-6955765f44-svxk5 100m (0%) 0 (0%) 70Mi (0%) 170Mi (0%) 22d\n kube-system etcd-jerma-control-plane 0 (0%) 0 (0%) 0 (0%) 0 (0%) 22d\n kube-system kindnet-bjddj 100m (0%) 100m (0%) 50Mi (0%) 50Mi (0%) 22d\n kube-system kube-apiserver-jerma-control-plane 250m (1%) 0 (0%) 0 (0%) 0 (0%) 22d\n kube-system kube-controller-manager-jerma-control-plane 200m (1%) 0 (0%) 0 (0%) 0 (0%) 22d\n kube-system kube-proxy-mm9zd 0 (0%) 0 (0%) 0 (0%) 0 (0%) 22d\n kube-system kube-scheduler-jerma-control-plane 100m (0%) 0 (0%) 0 (0%) 0 (0%) 22d\n local-path-storage local-path-provisioner-85445b74d4-7mg5w 0 (0%) 0 (0%) 0 (0%) 0 (0%) 22d\nAllocated resources:\n (Total limits may be over 100 percent, i.e., overcommitted.)\n Resource Requests Limits\n -------- -------- ------\n cpu 850m (5%) 100m (0%)\n memory 190Mi (0%) 390Mi (0%)\n ephemeral-storage 0 (0%) 0 (0%)\nEvents: \n" Apr 6 22:03:00.182: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe namespace kubectl-4482' Apr 6 22:03:00.298: INFO: stderr: "" Apr 6 22:03:00.298: INFO: stdout: "Name: kubectl-4482\nLabels: e2e-framework=kubectl\n e2e-run=e3428125-b2df-4352-968e-00ca0ce59725\nAnnotations: \nStatus: Active\n\nNo resource quota.\n\nNo LimitRange resource.\n" [AfterEach] 
[sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 6 22:03:00.298: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-4482" for this suite. •{"msg":"PASSED [sig-cli] Kubectl client Kubectl describe should check if kubectl describe prints relevant information for rc and pods [Conformance]","total":278,"completed":227,"skipped":3743,"failed":0} SSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 6 22:03:00.307: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:64 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:79 STEP: Creating service test in namespace statefulset-4263 [It] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating stateful set ss in namespace statefulset-4263 STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-4263 Apr 6 22:03:00.403: INFO: Found 0 stateful pods, waiting for 1 Apr 6 22:03:10.407: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: Confirming that stateful set scale up will not halt with unhealthy stateful pod Apr 6 22:03:10.410: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4263 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Apr 6 22:03:10.703: INFO: stderr: "I0406 22:03:10.543134 3222 log.go:172] (0xc000a99340) (0xc000bd06e0) Create stream\nI0406 22:03:10.543190 3222 log.go:172] (0xc000a99340) (0xc000bd06e0) Stream added, broadcasting: 1\nI0406 22:03:10.546923 3222 log.go:172] (0xc000a99340) Reply frame received for 1\nI0406 22:03:10.546993 3222 log.go:172] (0xc000a99340) (0xc000a88460) Create stream\nI0406 22:03:10.547016 3222 log.go:172] (0xc000a99340) (0xc000a88460) Stream added, broadcasting: 3\nI0406 22:03:10.548117 3222 log.go:172] (0xc000a99340) Reply frame received for 3\nI0406 22:03:10.548148 3222 log.go:172] (0xc000a99340) (0xc0006a66e0) Create stream\nI0406 22:03:10.548162 3222 log.go:172] (0xc000a99340) (0xc0006a66e0) Stream added, broadcasting: 5\nI0406 22:03:10.549047 3222 log.go:172] (0xc000a99340) Reply frame received for 5\nI0406 22:03:10.633629 3222 log.go:172] (0xc000a99340) Data frame received for 5\nI0406 22:03:10.633660 3222 log.go:172] (0xc0006a66e0) (5) Data frame handling\nI0406 22:03:10.633682 3222 log.go:172] (0xc0006a66e0) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0406 22:03:10.697631 3222 log.go:172] (0xc000a99340) Data frame received for 5\nI0406 22:03:10.697684 3222 log.go:172] (0xc0006a66e0) (5) 
Data frame handling\nI0406 22:03:10.697711 3222 log.go:172] (0xc000a99340) Data frame received for 1\nI0406 22:03:10.697723 3222 log.go:172] (0xc000bd06e0) (1) Data frame handling\nI0406 22:03:10.697730 3222 log.go:172] (0xc000bd06e0) (1) Data frame sent\nI0406 22:03:10.697752 3222 log.go:172] (0xc000a99340) Data frame received for 3\nI0406 22:03:10.697785 3222 log.go:172] (0xc000a99340) (0xc000bd06e0) Stream removed, broadcasting: 1\nI0406 22:03:10.697856 3222 log.go:172] (0xc000a88460) (3) Data frame handling\nI0406 22:03:10.697883 3222 log.go:172] (0xc000a88460) (3) Data frame sent\nI0406 22:03:10.697891 3222 log.go:172] (0xc000a99340) Data frame received for 3\nI0406 22:03:10.697896 3222 log.go:172] (0xc000a88460) (3) Data frame handling\nI0406 22:03:10.697905 3222 log.go:172] (0xc000a99340) Go away received\nI0406 22:03:10.698302 3222 log.go:172] (0xc000a99340) (0xc000bd06e0) Stream removed, broadcasting: 1\nI0406 22:03:10.698346 3222 log.go:172] (0xc000a99340) (0xc000a88460) Stream removed, broadcasting: 3\nI0406 22:03:10.698376 3222 log.go:172] (0xc000a99340) (0xc0006a66e0) Stream removed, broadcasting: 5\n" Apr 6 22:03:10.703: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Apr 6 22:03:10.703: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Apr 6 22:03:10.707: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true Apr 6 22:03:20.712: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Apr 6 22:03:20.712: INFO: Waiting for statefulset status.replicas updated to 0 Apr 6 22:03:20.727: INFO: POD NODE PHASE GRACE CONDITIONS Apr 6 22:03:20.727: INFO: ss-0 jerma-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-06 22:03:00 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-06 22:03:10 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-06 22:03:10 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-06 22:03:00 +0000 UTC }] Apr 6 22:03:20.727: INFO: Apr 6 22:03:20.727: INFO: StatefulSet ss has not reached scale 3, at 1 Apr 6 22:03:21.751: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.992765822s Apr 6 22:03:22.756: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.968331163s Apr 6 22:03:23.761: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.963623473s Apr 6 22:03:24.766: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.958624094s Apr 6 22:03:25.770: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.954045025s Apr 6 22:03:26.776: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.949489098s Apr 6 22:03:27.781: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.944223395s Apr 6 22:03:28.785: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.939274556s Apr 6 22:03:29.790: INFO: Verifying statefulset ss doesn't scale past 3 for another 934.309704ms STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-4263 Apr 6 22:03:30.796: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4263 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Apr 6 
22:03:31.025: INFO: stderr: "I0406 22:03:30.921880 3241 log.go:172] (0xc000213130) (0xc0007f8000) Create stream\nI0406 22:03:30.921931 3241 log.go:172] (0xc000213130) (0xc0007f8000) Stream added, broadcasting: 1\nI0406 22:03:30.924362 3241 log.go:172] (0xc000213130) Reply frame received for 1\nI0406 22:03:30.924392 3241 log.go:172] (0xc000213130) (0xc0005ada40) Create stream\nI0406 22:03:30.924403 3241 log.go:172] (0xc000213130) (0xc0005ada40) Stream added, broadcasting: 3\nI0406 22:03:30.925380 3241 log.go:172] (0xc000213130) Reply frame received for 3\nI0406 22:03:30.925415 3241 log.go:172] (0xc000213130) (0xc00022c000) Create stream\nI0406 22:03:30.925425 3241 log.go:172] (0xc000213130) (0xc00022c000) Stream added, broadcasting: 5\nI0406 22:03:30.926412 3241 log.go:172] (0xc000213130) Reply frame received for 5\nI0406 22:03:31.017828 3241 log.go:172] (0xc000213130) Data frame received for 3\nI0406 22:03:31.017886 3241 log.go:172] (0xc0005ada40) (3) Data frame handling\nI0406 22:03:31.017905 3241 log.go:172] (0xc0005ada40) (3) Data frame sent\nI0406 22:03:31.017916 3241 log.go:172] (0xc000213130) Data frame received for 3\nI0406 22:03:31.017934 3241 log.go:172] (0xc0005ada40) (3) Data frame handling\nI0406 22:03:31.017965 3241 log.go:172] (0xc000213130) Data frame received for 5\nI0406 22:03:31.017991 3241 log.go:172] (0xc00022c000) (5) Data frame handling\nI0406 22:03:31.018031 3241 log.go:172] (0xc00022c000) (5) Data frame sent\nI0406 22:03:31.018051 3241 log.go:172] (0xc000213130) Data frame received for 5\nI0406 22:03:31.018062 3241 log.go:172] (0xc00022c000) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0406 22:03:31.019695 3241 log.go:172] (0xc000213130) Data frame received for 1\nI0406 22:03:31.019729 3241 log.go:172] (0xc0007f8000) (1) Data frame handling\nI0406 22:03:31.019763 3241 log.go:172] (0xc0007f8000) (1) Data frame sent\nI0406 22:03:31.019803 3241 log.go:172] (0xc000213130) (0xc0007f8000) Stream removed, broadcasting: 1\nI0406 22:03:31.019939 3241 log.go:172] (0xc000213130) Go away received\nI0406 22:03:31.020370 3241 log.go:172] (0xc000213130) (0xc0007f8000) Stream removed, broadcasting: 1\nI0406 22:03:31.020403 3241 log.go:172] (0xc000213130) (0xc0005ada40) Stream removed, broadcasting: 3\nI0406 22:03:31.020424 3241 log.go:172] (0xc000213130) (0xc00022c000) Stream removed, broadcasting: 5\n" Apr 6 22:03:31.025: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Apr 6 22:03:31.025: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Apr 6 22:03:31.025: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4263 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Apr 6 22:03:31.262: INFO: stderr: "I0406 22:03:31.171659 3262 log.go:172] (0xc000104a50) (0xc0009ea140) Create stream\nI0406 22:03:31.171712 3262 log.go:172] (0xc000104a50) (0xc0009ea140) Stream added, broadcasting: 1\nI0406 22:03:31.181963 3262 log.go:172] (0xc000104a50) Reply frame received for 1\nI0406 22:03:31.182020 3262 log.go:172] (0xc000104a50) (0xc0002a7400) Create stream\nI0406 22:03:31.182032 3262 log.go:172] (0xc000104a50) (0xc0002a7400) Stream added, broadcasting: 3\nI0406 22:03:31.183978 3262 log.go:172] (0xc000104a50) Reply frame received for 3\nI0406 22:03:31.184009 3262 log.go:172] (0xc000104a50) (0xc0002a74a0) Create stream\nI0406 22:03:31.184023 3262 log.go:172] (0xc000104a50) 
(0xc0002a74a0) Stream added, broadcasting: 5\nI0406 22:03:31.184717 3262 log.go:172] (0xc000104a50) Reply frame received for 5\nI0406 22:03:31.236899 3262 log.go:172] (0xc000104a50) Data frame received for 5\nI0406 22:03:31.236936 3262 log.go:172] (0xc0002a74a0) (5) Data frame handling\nI0406 22:03:31.236956 3262 log.go:172] (0xc0002a74a0) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0406 22:03:31.253660 3262 log.go:172] (0xc000104a50) Data frame received for 5\nI0406 22:03:31.253685 3262 log.go:172] (0xc0002a74a0) (5) Data frame handling\nI0406 22:03:31.253695 3262 log.go:172] (0xc0002a74a0) (5) Data frame sent\nI0406 22:03:31.253703 3262 log.go:172] (0xc000104a50) Data frame received for 5\nI0406 22:03:31.253710 3262 log.go:172] (0xc0002a74a0) (5) Data frame handling\nI0406 22:03:31.253722 3262 log.go:172] (0xc000104a50) Data frame received for 3\nI0406 22:03:31.253726 3262 log.go:172] (0xc0002a7400) (3) Data frame handling\nI0406 22:03:31.253731 3262 log.go:172] (0xc0002a7400) (3) Data frame sent\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\nI0406 22:03:31.253800 3262 log.go:172] (0xc000104a50) Data frame received for 3\nI0406 22:03:31.253846 3262 log.go:172] (0xc0002a7400) (3) Data frame handling\nI0406 22:03:31.253878 3262 log.go:172] (0xc0002a74a0) (5) Data frame sent\nI0406 22:03:31.254067 3262 log.go:172] (0xc000104a50) Data frame received for 5\nI0406 22:03:31.254087 3262 log.go:172] (0xc0002a74a0) (5) Data frame handling\nI0406 22:03:31.256262 3262 log.go:172] (0xc000104a50) Data frame received for 1\nI0406 22:03:31.256287 3262 log.go:172] (0xc0009ea140) (1) Data frame handling\nI0406 22:03:31.256300 3262 log.go:172] (0xc0009ea140) (1) Data frame sent\nI0406 22:03:31.256314 3262 log.go:172] (0xc000104a50) (0xc0009ea140) Stream removed, broadcasting: 1\nI0406 22:03:31.256338 3262 log.go:172] (0xc000104a50) Go away received\nI0406 22:03:31.256964 3262 log.go:172] (0xc000104a50) (0xc0009ea140) Stream removed, broadcasting: 1\nI0406 22:03:31.257003 3262 log.go:172] (0xc000104a50) (0xc0002a7400) Stream removed, broadcasting: 3\nI0406 22:03:31.257026 3262 log.go:172] (0xc000104a50) (0xc0002a74a0) Stream removed, broadcasting: 5\n" Apr 6 22:03:31.262: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Apr 6 22:03:31.262: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Apr 6 22:03:31.262: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4263 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Apr 6 22:03:31.478: INFO: stderr: "I0406 22:03:31.412365 3285 log.go:172] (0xc0009da2c0) (0xc00085c140) Create stream\nI0406 22:03:31.412419 3285 log.go:172] (0xc0009da2c0) (0xc00085c140) Stream added, broadcasting: 1\nI0406 22:03:31.416447 3285 log.go:172] (0xc0009da2c0) Reply frame received for 1\nI0406 22:03:31.416507 3285 log.go:172] (0xc0009da2c0) (0xc0008aa000) Create stream\nI0406 22:03:31.416531 3285 log.go:172] (0xc0009da2c0) (0xc0008aa000) Stream added, broadcasting: 3\nI0406 22:03:31.417630 3285 log.go:172] (0xc0009da2c0) Reply frame received for 3\nI0406 22:03:31.417670 3285 log.go:172] (0xc0009da2c0) (0xc000958000) Create stream\nI0406 22:03:31.417683 3285 log.go:172] (0xc0009da2c0) (0xc000958000) Stream added, broadcasting: 5\nI0406 22:03:31.418550 3285 log.go:172] (0xc0009da2c0) Reply frame received for 5\nI0406 22:03:31.470935 3285 
log.go:172] (0xc0009da2c0) Data frame received for 5\nI0406 22:03:31.470980 3285 log.go:172] (0xc000958000) (5) Data frame handling\nI0406 22:03:31.470994 3285 log.go:172] (0xc000958000) (5) Data frame sent\nI0406 22:03:31.471005 3285 log.go:172] (0xc0009da2c0) Data frame received for 5\nI0406 22:03:31.471015 3285 log.go:172] (0xc000958000) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\nI0406 22:03:31.471054 3285 log.go:172] (0xc0009da2c0) Data frame received for 3\nI0406 22:03:31.471069 3285 log.go:172] (0xc0008aa000) (3) Data frame handling\nI0406 22:03:31.471085 3285 log.go:172] (0xc0008aa000) (3) Data frame sent\nI0406 22:03:31.471106 3285 log.go:172] (0xc0009da2c0) Data frame received for 3\nI0406 22:03:31.471131 3285 log.go:172] (0xc0008aa000) (3) Data frame handling\nI0406 22:03:31.472843 3285 log.go:172] (0xc0009da2c0) Data frame received for 1\nI0406 22:03:31.472878 3285 log.go:172] (0xc00085c140) (1) Data frame handling\nI0406 22:03:31.472901 3285 log.go:172] (0xc00085c140) (1) Data frame sent\nI0406 22:03:31.472918 3285 log.go:172] (0xc0009da2c0) (0xc00085c140) Stream removed, broadcasting: 1\nI0406 22:03:31.472936 3285 log.go:172] (0xc0009da2c0) Go away received\nI0406 22:03:31.473583 3285 log.go:172] (0xc0009da2c0) (0xc00085c140) Stream removed, broadcasting: 1\nI0406 22:03:31.473603 3285 log.go:172] (0xc0009da2c0) (0xc0008aa000) Stream removed, broadcasting: 3\nI0406 22:03:31.473614 3285 log.go:172] (0xc0009da2c0) (0xc000958000) Stream removed, broadcasting: 5\n" Apr 6 22:03:31.478: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Apr 6 22:03:31.478: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-2: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Apr 6 22:03:31.482: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=false Apr 6 22:03:41.486: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true Apr 6 22:03:41.486: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true Apr 6 22:03:41.486: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Scale down will not halt with unhealthy stateful pod Apr 6 22:03:41.489: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4263 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Apr 6 22:03:41.705: INFO: stderr: "I0406 22:03:41.614338 3306 log.go:172] (0xc0008f20b0) (0xc00024b540) Create stream\nI0406 22:03:41.614411 3306 log.go:172] (0xc0008f20b0) (0xc00024b540) Stream added, broadcasting: 1\nI0406 22:03:41.617399 3306 log.go:172] (0xc0008f20b0) Reply frame received for 1\nI0406 22:03:41.617447 3306 log.go:172] (0xc0008f20b0) (0xc0008de000) Create stream\nI0406 22:03:41.617464 3306 log.go:172] (0xc0008f20b0) (0xc0008de000) Stream added, broadcasting: 3\nI0406 22:03:41.618412 3306 log.go:172] (0xc0008f20b0) Reply frame received for 3\nI0406 22:03:41.618442 3306 log.go:172] (0xc0008f20b0) (0xc000934000) Create stream\nI0406 22:03:41.618452 3306 log.go:172] (0xc0008f20b0) (0xc000934000) Stream added, broadcasting: 5\nI0406 22:03:41.619334 3306 log.go:172] (0xc0008f20b0) Reply frame received for 5\nI0406 22:03:41.698055 3306 log.go:172] (0xc0008f20b0) Data frame received for 3\nI0406 22:03:41.698105 3306 log.go:172] (0xc0008de000) (3) Data frame 
handling\nI0406 22:03:41.698136 3306 log.go:172] (0xc0008de000) (3) Data frame sent\nI0406 22:03:41.698150 3306 log.go:172] (0xc0008f20b0) Data frame received for 3\nI0406 22:03:41.698160 3306 log.go:172] (0xc0008de000) (3) Data frame handling\nI0406 22:03:41.698233 3306 log.go:172] (0xc0008f20b0) Data frame received for 5\nI0406 22:03:41.698244 3306 log.go:172] (0xc000934000) (5) Data frame handling\nI0406 22:03:41.698256 3306 log.go:172] (0xc000934000) (5) Data frame sent\nI0406 22:03:41.698287 3306 log.go:172] (0xc0008f20b0) Data frame received for 5\nI0406 22:03:41.698304 3306 log.go:172] (0xc000934000) (5) Data frame handling\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0406 22:03:41.700448 3306 log.go:172] (0xc0008f20b0) Data frame received for 1\nI0406 22:03:41.700482 3306 log.go:172] (0xc00024b540) (1) Data frame handling\nI0406 22:03:41.700515 3306 log.go:172] (0xc00024b540) (1) Data frame sent\nI0406 22:03:41.700536 3306 log.go:172] (0xc0008f20b0) (0xc00024b540) Stream removed, broadcasting: 1\nI0406 22:03:41.700893 3306 log.go:172] (0xc0008f20b0) (0xc00024b540) Stream removed, broadcasting: 1\nI0406 22:03:41.700912 3306 log.go:172] (0xc0008f20b0) (0xc0008de000) Stream removed, broadcasting: 3\nI0406 22:03:41.701038 3306 log.go:172] (0xc0008f20b0) Go away received\nI0406 22:03:41.701294 3306 log.go:172] (0xc0008f20b0) (0xc000934000) Stream removed, broadcasting: 5\n" Apr 6 22:03:41.705: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Apr 6 22:03:41.705: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Apr 6 22:03:41.705: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4263 ss-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Apr 6 22:03:41.936: INFO: stderr: "I0406 22:03:41.831185 3329 log.go:172] (0xc000205130) (0xc0007301e0) Create stream\nI0406 22:03:41.831239 3329 log.go:172] (0xc000205130) (0xc0007301e0) Stream added, broadcasting: 1\nI0406 22:03:41.834044 3329 log.go:172] (0xc000205130) Reply frame received for 1\nI0406 22:03:41.834127 3329 log.go:172] (0xc000205130) (0xc0004f7b80) Create stream\nI0406 22:03:41.834159 3329 log.go:172] (0xc000205130) (0xc0004f7b80) Stream added, broadcasting: 3\nI0406 22:03:41.835196 3329 log.go:172] (0xc000205130) Reply frame received for 3\nI0406 22:03:41.835221 3329 log.go:172] (0xc000205130) (0xc000730280) Create stream\nI0406 22:03:41.835229 3329 log.go:172] (0xc000205130) (0xc000730280) Stream added, broadcasting: 5\nI0406 22:03:41.836175 3329 log.go:172] (0xc000205130) Reply frame received for 5\nI0406 22:03:41.901898 3329 log.go:172] (0xc000205130) Data frame received for 5\nI0406 22:03:41.901937 3329 log.go:172] (0xc000730280) (5) Data frame handling\nI0406 22:03:41.901957 3329 log.go:172] (0xc000730280) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0406 22:03:41.928442 3329 log.go:172] (0xc000205130) Data frame received for 5\nI0406 22:03:41.928457 3329 log.go:172] (0xc000730280) (5) Data frame handling\nI0406 22:03:41.928477 3329 log.go:172] (0xc000205130) Data frame received for 3\nI0406 22:03:41.928484 3329 log.go:172] (0xc0004f7b80) (3) Data frame handling\nI0406 22:03:41.928494 3329 log.go:172] (0xc0004f7b80) (3) Data frame sent\nI0406 22:03:41.928502 3329 log.go:172] (0xc000205130) Data frame received for 3\nI0406 22:03:41.928508 3329 log.go:172] (0xc0004f7b80) (3) Data frame handling\nI0406 
22:03:41.931179 3329 log.go:172] (0xc000205130) Data frame received for 1\nI0406 22:03:41.931192 3329 log.go:172] (0xc0007301e0) (1) Data frame handling\nI0406 22:03:41.931198 3329 log.go:172] (0xc0007301e0) (1) Data frame sent\nI0406 22:03:41.931217 3329 log.go:172] (0xc000205130) (0xc0007301e0) Stream removed, broadcasting: 1\nI0406 22:03:41.931346 3329 log.go:172] (0xc000205130) Go away received\nI0406 22:03:41.931557 3329 log.go:172] (0xc000205130) (0xc0007301e0) Stream removed, broadcasting: 1\nI0406 22:03:41.931570 3329 log.go:172] (0xc000205130) (0xc0004f7b80) Stream removed, broadcasting: 3\nI0406 22:03:41.931575 3329 log.go:172] (0xc000205130) (0xc000730280) Stream removed, broadcasting: 5\n" Apr 6 22:03:41.936: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Apr 6 22:03:41.936: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Apr 6 22:03:41.936: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4263 ss-2 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Apr 6 22:03:42.219: INFO: stderr: "I0406 22:03:42.101514 3352 log.go:172] (0xc000960840) (0xc0009c2140) Create stream\nI0406 22:03:42.101568 3352 log.go:172] (0xc000960840) (0xc0009c2140) Stream added, broadcasting: 1\nI0406 22:03:42.104386 3352 log.go:172] (0xc000960840) Reply frame received for 1\nI0406 22:03:42.104450 3352 log.go:172] (0xc000960840) (0xc0002a5360) Create stream\nI0406 22:03:42.104475 3352 log.go:172] (0xc000960840) (0xc0002a5360) Stream added, broadcasting: 3\nI0406 22:03:42.105685 3352 log.go:172] (0xc000960840) Reply frame received for 3\nI0406 22:03:42.105729 3352 log.go:172] (0xc000960840) (0xc0008c4000) Create stream\nI0406 22:03:42.105743 3352 log.go:172] (0xc000960840) (0xc0008c4000) Stream added, broadcasting: 5\nI0406 22:03:42.106797 3352 log.go:172] (0xc000960840) Reply frame received for 5\nI0406 22:03:42.177648 3352 log.go:172] (0xc000960840) Data frame received for 5\nI0406 22:03:42.177674 3352 log.go:172] (0xc0008c4000) (5) Data frame handling\nI0406 22:03:42.177689 3352 log.go:172] (0xc0008c4000) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0406 22:03:42.214010 3352 log.go:172] (0xc000960840) Data frame received for 3\nI0406 22:03:42.214051 3352 log.go:172] (0xc0002a5360) (3) Data frame handling\nI0406 22:03:42.214063 3352 log.go:172] (0xc0002a5360) (3) Data frame sent\nI0406 22:03:42.214071 3352 log.go:172] (0xc000960840) Data frame received for 3\nI0406 22:03:42.214079 3352 log.go:172] (0xc0002a5360) (3) Data frame handling\nI0406 22:03:42.214136 3352 log.go:172] (0xc000960840) Data frame received for 5\nI0406 22:03:42.214171 3352 log.go:172] (0xc0008c4000) (5) Data frame handling\nI0406 22:03:42.215224 3352 log.go:172] (0xc000960840) Data frame received for 1\nI0406 22:03:42.215246 3352 log.go:172] (0xc0009c2140) (1) Data frame handling\nI0406 22:03:42.215259 3352 log.go:172] (0xc0009c2140) (1) Data frame sent\nI0406 22:03:42.215273 3352 log.go:172] (0xc000960840) (0xc0009c2140) Stream removed, broadcasting: 1\nI0406 22:03:42.215294 3352 log.go:172] (0xc000960840) Go away received\nI0406 22:03:42.215782 3352 log.go:172] (0xc000960840) (0xc0009c2140) Stream removed, broadcasting: 1\nI0406 22:03:42.215802 3352 log.go:172] (0xc000960840) (0xc0002a5360) Stream removed, broadcasting: 3\nI0406 22:03:42.215858 3352 log.go:172] (0xc000960840) (0xc0008c4000) Stream removed, 
broadcasting: 5\n" Apr 6 22:03:42.219: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Apr 6 22:03:42.219: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-2: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Apr 6 22:03:42.219: INFO: Waiting for statefulset status.replicas updated to 0 Apr 6 22:03:42.223: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 2 Apr 6 22:03:52.231: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Apr 6 22:03:52.231: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false Apr 6 22:03:52.231: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false Apr 6 22:03:52.244: INFO: POD NODE PHASE GRACE CONDITIONS Apr 6 22:03:52.244: INFO: ss-0 jerma-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-06 22:03:00 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-06 22:03:41 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-06 22:03:41 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-06 22:03:00 +0000 UTC }] Apr 6 22:03:52.244: INFO: ss-1 jerma-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-06 22:03:20 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-06 22:03:42 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-06 22:03:42 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-06 22:03:20 +0000 UTC }] Apr 6 22:03:52.244: INFO: ss-2 jerma-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-06 22:03:20 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-06 22:03:42 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-06 22:03:42 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-06 22:03:20 +0000 UTC }] Apr 6 22:03:52.244: INFO: Apr 6 22:03:52.244: INFO: StatefulSet ss has not reached scale 0, at 3 Apr 6 22:03:53.250: INFO: POD NODE PHASE GRACE CONDITIONS Apr 6 22:03:53.250: INFO: ss-0 jerma-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-06 22:03:00 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-06 22:03:41 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-06 22:03:41 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-06 22:03:00 +0000 UTC }] Apr 6 22:03:53.250: INFO: ss-1 jerma-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-06 22:03:20 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-06 22:03:42 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-06 22:03:42 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-06 22:03:20 +0000 UTC }] Apr 6 22:03:53.250: INFO: ss-2 jerma-worker2 
Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-06 22:03:20 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-06 22:03:42 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-06 22:03:42 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-06 22:03:20 +0000 UTC }] Apr 6 22:03:53.250: INFO: Apr 6 22:03:53.250: INFO: StatefulSet ss has not reached scale 0, at 3 Apr 6 22:03:54.255: INFO: POD NODE PHASE GRACE CONDITIONS Apr 6 22:03:54.255: INFO: ss-0 jerma-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-06 22:03:00 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-06 22:03:41 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-06 22:03:41 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-06 22:03:00 +0000 UTC }] Apr 6 22:03:54.255: INFO: ss-1 jerma-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-06 22:03:20 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-06 22:03:42 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-06 22:03:42 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-06 22:03:20 +0000 UTC }] Apr 6 22:03:54.255: INFO: ss-2 jerma-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-06 22:03:20 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-06 22:03:42 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-06 22:03:42 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-06 22:03:20 +0000 UTC }] Apr 6 22:03:54.255: INFO: Apr 6 22:03:54.255: INFO: StatefulSet ss has not reached scale 0, at 3 Apr 6 22:03:55.262: INFO: POD NODE PHASE GRACE CONDITIONS Apr 6 22:03:55.262: INFO: ss-0 jerma-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-06 22:03:00 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-06 22:03:41 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-06 22:03:41 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-06 22:03:00 +0000 UTC }] Apr 6 22:03:55.262: INFO: ss-1 jerma-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-06 22:03:20 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-06 22:03:42 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-06 22:03:42 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-06 22:03:20 +0000 UTC }] Apr 6 22:03:55.262: INFO: ss-2 jerma-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-06 22:03:20 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-06 22:03:42 +0000 UTC ContainersNotReady containers with unready status: 
[webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-06 22:03:42 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-06 22:03:20 +0000 UTC }] Apr 6 22:03:55.262: INFO: Apr 6 22:03:55.262: INFO: StatefulSet ss has not reached scale 0, at 3 Apr 6 22:03:56.267: INFO: POD NODE PHASE GRACE CONDITIONS Apr 6 22:03:56.267: INFO: ss-0 jerma-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-06 22:03:00 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-06 22:03:41 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-06 22:03:41 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-06 22:03:00 +0000 UTC }] Apr 6 22:03:56.267: INFO: ss-1 jerma-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-06 22:03:20 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-06 22:03:42 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-06 22:03:42 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-06 22:03:20 +0000 UTC }] Apr 6 22:03:56.267: INFO: ss-2 jerma-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-06 22:03:20 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-06 22:03:42 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-06 22:03:42 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-06 22:03:20 +0000 UTC }] Apr 6 22:03:56.267: INFO: Apr 6 22:03:56.267: INFO: StatefulSet ss has not reached scale 0, at 3 Apr 6 22:03:57.272: INFO: POD NODE PHASE GRACE CONDITIONS Apr 6 22:03:57.272: INFO: ss-0 jerma-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-06 22:03:00 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-06 22:03:41 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-06 22:03:41 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-06 22:03:00 +0000 UTC }] Apr 6 22:03:57.272: INFO: ss-1 jerma-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-06 22:03:20 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-06 22:03:42 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-06 22:03:42 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-06 22:03:20 +0000 UTC }] Apr 6 22:03:57.273: INFO: ss-2 jerma-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-06 22:03:20 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-06 22:03:42 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-06 22:03:42 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-06 
22:03:20 +0000 UTC }] Apr 6 22:03:57.273: INFO: Apr 6 22:03:57.273: INFO: StatefulSet ss has not reached scale 0, at 3 Apr 6 22:03:58.278: INFO: POD NODE PHASE GRACE CONDITIONS Apr 6 22:03:58.278: INFO: ss-0 jerma-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-06 22:03:00 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-06 22:03:41 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-06 22:03:41 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-06 22:03:00 +0000 UTC }] Apr 6 22:03:58.278: INFO: ss-1 jerma-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-06 22:03:20 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-06 22:03:42 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-06 22:03:42 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-06 22:03:20 +0000 UTC }] Apr 6 22:03:58.278: INFO: ss-2 jerma-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-06 22:03:20 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-06 22:03:42 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-06 22:03:42 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-06 22:03:20 +0000 UTC }] Apr 6 22:03:58.278: INFO: Apr 6 22:03:58.278: INFO: StatefulSet ss has not reached scale 0, at 3 Apr 6 22:03:59.282: INFO: POD NODE PHASE GRACE CONDITIONS Apr 6 22:03:59.282: INFO: ss-1 jerma-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-06 22:03:20 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-06 22:03:42 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-06 22:03:42 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-06 22:03:20 +0000 UTC }] Apr 6 22:03:59.282: INFO: ss-2 jerma-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-06 22:03:20 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-06 22:03:42 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-06 22:03:42 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-06 22:03:20 +0000 UTC }] Apr 6 22:03:59.282: INFO: Apr 6 22:03:59.282: INFO: StatefulSet ss has not reached scale 0, at 2 Apr 6 22:04:00.287: INFO: Verifying statefulset ss doesn't scale past 0 for another 1.956003774s Apr 6 22:04:01.291: INFO: Verifying statefulset ss doesn't scale past 0 for another 951.567286ms STEP: Scaling down stateful set ss to 0 replicas and waiting until none of pods will run in namespacestatefulset-4263 Apr 6 22:04:02.295: INFO: Scaling statefulset ss to 0 Apr 6 22:04:02.305: INFO: Waiting for statefulset status.replicas updated to 0 [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90 Apr 6 22:04:02.308: INFO: Deleting all statefulset in ns statefulset-4263 Apr 6 22:04:02.310: INFO: Scaling statefulset ss to 0 Apr 6 22:04:02.317: INFO: Waiting for statefulset status.replicas updated to 0 Apr 6 22:04:02.319: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 6 22:04:02.335: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-4263" for this suite. • [SLOW TEST:62.069 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]","total":278,"completed":228,"skipped":3748,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 6 22:04:02.376: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:69 [It] RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Apr 6 22:04:02.415: INFO: Creating replica set "test-rolling-update-controller" (going to be adopted) Apr 6 22:04:02.428: INFO: Pod name sample-pod: Found 0 pods out of 1 Apr 6 22:04:07.431: INFO: Pod name sample-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Apr 6 22:04:07.431: INFO: Creating deployment "test-rolling-update-deployment" Apr 6 22:04:07.435: INFO: Ensuring deployment "test-rolling-update-deployment" gets the next revision from the one the adopted replica set "test-rolling-update-controller" has Apr 6 22:04:07.461: INFO: new replicaset for deployment "test-rolling-update-deployment" is yet to be created Apr 6 22:04:09.468: INFO: Ensuring status for deployment "test-rolling-update-deployment" is the expected Apr 6 22:04:09.471: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721807447, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721807447, loc:(*time.Location)(0x78ee080)}}, 
Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721807447, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721807447, loc:(*time.Location)(0x78ee080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-67cf4f6444\" is progressing."}}, CollisionCount:(*int32)(nil)} Apr 6 22:04:11.475: INFO: Ensuring deployment "test-rolling-update-deployment" has one old replica set (the one it adopted) [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:63 Apr 6 22:04:11.482: INFO: Deployment "test-rolling-update-deployment": &Deployment{ObjectMeta:{test-rolling-update-deployment deployment-3357 /apis/apps/v1/namespaces/deployment-3357/deployments/test-rolling-update-deployment 57aeaa20-e40b-4de8-8365-cc79bffdbb4d 5990335 1 2020-04-06 22:04:07 +0000 UTC map[name:sample-pod] map[deployment.kubernetes.io/revision:3546343826724305833] [] [] []},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod] map[] [] [] []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc005422b88 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%!,(MISSING)MaxSurge:25%!,(MISSING)},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2020-04-06 22:04:07 +0000 UTC,LastTransitionTime:2020-04-06 22:04:07 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-rolling-update-deployment-67cf4f6444" has successfully progressed.,LastUpdateTime:2020-04-06 22:04:10 +0000 UTC,LastTransitionTime:2020-04-06 22:04:07 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},} Apr 6 22:04:11.484: INFO: New ReplicaSet "test-rolling-update-deployment-67cf4f6444" of Deployment "test-rolling-update-deployment": &ReplicaSet{ObjectMeta:{test-rolling-update-deployment-67cf4f6444 deployment-3357 /apis/apps/v1/namespaces/deployment-3357/replicasets/test-rolling-update-deployment-67cf4f6444 9d391812-1cdd-428b-a400-b61cef4bf57c 5990324 1 2020-04-06 22:04:07 +0000 UTC map[name:sample-pod pod-template-hash:67cf4f6444] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:3546343826724305833] [{apps/v1 Deployment test-rolling-update-deployment 
57aeaa20-e40b-4de8-8365-cc79bffdbb4d 0xc005423027 0xc005423028}] [] []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod-template-hash: 67cf4f6444,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod pod-template-hash:67cf4f6444] map[] [] [] []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc005423098 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} Apr 6 22:04:11.484: INFO: All old ReplicaSets of Deployment "test-rolling-update-deployment": Apr 6 22:04:11.484: INFO: &ReplicaSet{ObjectMeta:{test-rolling-update-controller deployment-3357 /apis/apps/v1/namespaces/deployment-3357/replicasets/test-rolling-update-controller a8fb9b52-0152-4f08-843c-09e3206911f1 5990333 2 2020-04-06 22:04:02 +0000 UTC map[name:sample-pod pod:httpd] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:3546343826724305832] [{apps/v1 Deployment test-rolling-update-deployment 57aeaa20-e40b-4de8-8365-cc79bffdbb4d 0xc005422f3f 0xc005422f50}] [] []},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod pod:httpd] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc005422fb8 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Apr 6 22:04:11.487: INFO: Pod "test-rolling-update-deployment-67cf4f6444-jskj4" is available: &Pod{ObjectMeta:{test-rolling-update-deployment-67cf4f6444-jskj4 test-rolling-update-deployment-67cf4f6444- deployment-3357 /api/v1/namespaces/deployment-3357/pods/test-rolling-update-deployment-67cf4f6444-jskj4 43acde3a-9792-41df-ad03-a61a4994aa9e 5990323 0 2020-04-06 22:04:07 +0000 UTC map[name:sample-pod pod-template-hash:67cf4f6444] map[] [{apps/v1 ReplicaSet test-rolling-update-deployment-67cf4f6444 9d391812-1cdd-428b-a400-b61cef4bf57c 0xc005423557 0xc005423558}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-wc65t,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-wc65t,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-wc65t,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-06 22:04:07 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-06 22:04:10 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-06 22:04:10 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-06 22:04:07 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.8,PodIP:10.244.2.112,StartTime:2020-04-06 22:04:07 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-04-06 22:04:09 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,ImageID:gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5,ContainerID:containerd://ab767fff40464ea69209c9ddb53ec79c0ec9cd70bc886ea6b86891e03a2ac6ce,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.112,},},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 6 22:04:11.487: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-3357" for this suite. • [SLOW TEST:9.117 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance]","total":278,"completed":229,"skipped":3777,"failed":0} SSSSSSSSSS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 6 22:04:11.494: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default command and arguments [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test override all Apr 6 22:04:11.577: INFO: Waiting up to 5m0s for pod "client-containers-39d09cc6-e422-43cf-938d-36b3faee39f6" in namespace "containers-6688" to be "success or failure" Apr 6 22:04:11.580: INFO: Pod "client-containers-39d09cc6-e422-43cf-938d-36b3faee39f6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.789147ms Apr 6 22:04:13.584: INFO: Pod "client-containers-39d09cc6-e422-43cf-938d-36b3faee39f6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006582299s Apr 6 22:04:15.587: INFO: Pod "client-containers-39d09cc6-e422-43cf-938d-36b3faee39f6": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.010261049s STEP: Saw pod success Apr 6 22:04:15.588: INFO: Pod "client-containers-39d09cc6-e422-43cf-938d-36b3faee39f6" satisfied condition "success or failure" Apr 6 22:04:15.590: INFO: Trying to get logs from node jerma-worker pod client-containers-39d09cc6-e422-43cf-938d-36b3faee39f6 container test-container: STEP: delete the pod Apr 6 22:04:15.623: INFO: Waiting for pod client-containers-39d09cc6-e422-43cf-938d-36b3faee39f6 to disappear Apr 6 22:04:15.637: INFO: Pod client-containers-39d09cc6-e422-43cf-938d-36b3faee39f6 no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 6 22:04:15.637: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-6688" for this suite. •{"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance]","total":278,"completed":230,"skipped":3787,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 6 22:04:15.644: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating the pod Apr 6 22:04:20.239: INFO: Successfully updated pod "labelsupdate3dacd3bc-791e-4810-ae03-38c26ccd23ed" [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 6 22:04:22.278: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-7676" for this suite. 
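------------------------------
The downward-api test above ("Successfully updated pod" followed by a wait) relies on the kubelet refreshing a downwardAPI volume file after the pod's labels are modified. A minimal, hand-runnable sketch of that behavior — the pod name labels-demo, the busybox image, and the /etc/podinfo path are illustrative, not taken from the test's own manifest:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: labels-demo
  labels:
    tier: frontend
spec:
  containers:
  - name: client-container
    image: busybox
    # Print the projected labels file in a loop so refreshes are visible.
    command: ["sh", "-c", "while true; do cat /etc/podinfo/labels; echo; sleep 5; done"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: labels
        fieldRef:
          fieldPath: metadata.labels
EOF
# Changing a label should eventually be reflected in /etc/podinfo/labels,
# which is the condition the test polls for:
kubectl label pod labels-demo tier=backend --overwrite
------------------------------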
• [SLOW TEST:6.642 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35 should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance]","total":278,"completed":231,"skipped":3803,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 6 22:04:22.287: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153 [It] should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the pod Apr 6 22:04:22.383: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 6 22:04:29.639: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-3117" for this suite. 
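------------------------------
The init-container test above asserts ordering on a restartPolicy: Never pod: each entry in spec.initContainers must run to completion, in declaration order, before any app container starts. A sketch of the same shape, runnable by hand (the names and the busybox image are illustrative, not the test's generated ones):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: init-demo
spec:
  restartPolicy: Never
  initContainers:
  - name: init1
    image: busybox
    command: ["true"]   # must exit 0 before init2 starts
  - name: init2
    image: busybox
    command: ["true"]   # must exit 0 before the app container starts
  containers:
  - name: main
    image: busybox
    command: ["sh", "-c", "echo app container ran last"]
EOF
# Watch the pod move through Init:0/2 -> Init:1/2 -> PodInitializing -> Completed:
kubectl get pod init-demo -w
------------------------------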
• [SLOW TEST:7.371 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance]","total":278,"completed":232,"skipped":3826,"failed":0} SSS ------------------------------ [sig-apps] Deployment deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 6 22:04:29.658: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:69 [It] deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Apr 6 22:04:29.742: INFO: Pod name cleanup-pod: Found 0 pods out of 1 Apr 6 22:04:34.747: INFO: Pod name cleanup-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Apr 6 22:04:34.747: INFO: Creating deployment test-cleanup-deployment STEP: Waiting for deployment test-cleanup-deployment history to be cleaned up [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:63 Apr 6 22:04:34.814: INFO: Deployment "test-cleanup-deployment": &Deployment{ObjectMeta:{test-cleanup-deployment deployment-6482 /apis/apps/v1/namespaces/deployment-6482/deployments/test-cleanup-deployment 829635cb-af1a-4d26-a965-e9451cc5fad1 5990529 1 2020-04-06 22:04:34 +0000 UTC map[name:cleanup-pod] map[] [] [] []},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:cleanup-pod] map[] [] [] []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc00503cc38 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%!,(MISSING)MaxSurge:25%!,(MISSING)},},MinReadySeconds:0,RevisionHistoryLimit:*0,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:0,Replicas:0,UpdatedReplicas:0,AvailableReplicas:0,UnavailableReplicas:0,Conditions:[]DeploymentCondition{},ReadyReplicas:0,CollisionCount:nil,},} Apr 6 22:04:34.854: INFO: New ReplicaSet "test-cleanup-deployment-55ffc6b7b6" of 
Deployment "test-cleanup-deployment": &ReplicaSet{ObjectMeta:{test-cleanup-deployment-55ffc6b7b6 deployment-6482 /apis/apps/v1/namespaces/deployment-6482/replicasets/test-cleanup-deployment-55ffc6b7b6 0be519c9-bcc5-4127-8433-a564ea04f76c 5990536 1 2020-04-06 22:04:34 +0000 UTC map[name:cleanup-pod pod-template-hash:55ffc6b7b6] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-cleanup-deployment 829635cb-af1a-4d26-a965-e9451cc5fad1 0xc00503d2c7 0xc00503d2c8}] [] []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod-template-hash: 55ffc6b7b6,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:cleanup-pod pod-template-hash:55ffc6b7b6] map[] [] [] []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc00503d4a8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Apr 6 22:04:34.854: INFO: All old ReplicaSets of Deployment "test-cleanup-deployment": Apr 6 22:04:34.854: INFO: &ReplicaSet{ObjectMeta:{test-cleanup-controller deployment-6482 /apis/apps/v1/namespaces/deployment-6482/replicasets/test-cleanup-controller 60db371e-0681-4a77-9ee6-ad0a05b18601 5990531 1 2020-04-06 22:04:29 +0000 UTC map[name:cleanup-pod pod:httpd] map[] [{apps/v1 Deployment test-cleanup-deployment 829635cb-af1a-4d26-a965-e9451cc5fad1 0xc00503d1bf 0xc00503d1d0}] [] []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:cleanup-pod pod:httpd] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc00503d238 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} Apr 6 22:04:34.915: INFO: Pod "test-cleanup-controller-wwkbz" is available: &Pod{ObjectMeta:{test-cleanup-controller-wwkbz test-cleanup-controller- deployment-6482 /api/v1/namespaces/deployment-6482/pods/test-cleanup-controller-wwkbz 14f33916-d277-45a5-b05f-7e89c25eb6b1 5990513 0 2020-04-06 22:04:29 +0000 UTC map[name:cleanup-pod pod:httpd] map[] [{apps/v1 ReplicaSet test-cleanup-controller 60db371e-0681-4a77-9ee6-ad0a05b18601 0xc0048511e7 0xc0048511e8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-wt7x5,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-wt7x5,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-wt7x5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-06 22:04:29 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-06 22:04:32 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-06 22:04:32 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-06 22:04:29 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.8,PodIP:10.244.2.113,StartTime:2020-04-06 22:04:29 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-04-06 22:04:32 +0000 
UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://4817a7c021b77def394585f4b30a8064b70a0e9a31a2efbde52d35476aece981,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.113,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 6 22:04:34.915: INFO: Pod "test-cleanup-deployment-55ffc6b7b6-28cpk" is not available: &Pod{ObjectMeta:{test-cleanup-deployment-55ffc6b7b6-28cpk test-cleanup-deployment-55ffc6b7b6- deployment-6482 /api/v1/namespaces/deployment-6482/pods/test-cleanup-deployment-55ffc6b7b6-28cpk ed0decd9-ae1d-4b5a-b626-a7cee4b7a387 5990538 0 2020-04-06 22:04:34 +0000 UTC map[name:cleanup-pod pod-template-hash:55ffc6b7b6] map[] [{apps/v1 ReplicaSet test-cleanup-deployment-55ffc6b7b6 0be519c9-bcc5-4127-8433-a564ea04f76c 0xc004851377 0xc004851378}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-wt7x5,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-wt7x5,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-wt7x5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespa
ce:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-06 22:04:34 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 6 22:04:34.916: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-6482" for this suite. • [SLOW TEST:5.273 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Deployment deployment should delete old replica sets [Conformance]","total":278,"completed":233,"skipped":3829,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 6 22:04:34.931: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:133 [It] should rollback without unnecessary restarts [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Apr 6 22:04:35.043: INFO: Create a RollingUpdate DaemonSet Apr 6 22:04:35.047: INFO: Check that daemon pods launch on every node of the cluster Apr 6 22:04:35.097: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 6 22:04:35.172: INFO: Number of nodes with available pods: 0 Apr 6 22:04:35.172: INFO: Node jerma-worker is running more than one daemon pod Apr 6 22:04:36.178: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 6 22:04:36.182: INFO: Number of nodes with available pods: 0 Apr 6 22:04:36.182: INFO: Node jerma-worker is running more than one daemon pod Apr 6 22:04:37.177: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 6 22:04:37.180: INFO: Number of nodes with available pods: 0 Apr 6 22:04:37.180: INFO: Node jerma-worker is running more than one daemon pod Apr 6 22:04:38.177: INFO: DaemonSet pods 
can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 6 22:04:38.179: INFO: Number of nodes with available pods: 0 Apr 6 22:04:38.179: INFO: Node jerma-worker is running more than one daemon pod Apr 6 22:04:39.185: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 6 22:04:39.228: INFO: Number of nodes with available pods: 2 Apr 6 22:04:39.228: INFO: Number of running nodes: 2, number of available pods: 2 Apr 6 22:04:39.228: INFO: Update the DaemonSet to trigger a rollout Apr 6 22:04:39.369: INFO: Updating DaemonSet daemon-set Apr 6 22:04:50.385: INFO: Roll back the DaemonSet before rollout is complete Apr 6 22:04:50.391: INFO: Updating DaemonSet daemon-set Apr 6 22:04:50.391: INFO: Make sure DaemonSet rollback is complete Apr 6 22:04:50.402: INFO: Wrong image for pod: daemon-set-fj8xt. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Apr 6 22:04:50.402: INFO: Pod daemon-set-fj8xt is not available Apr 6 22:04:50.424: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 6 22:04:51.433: INFO: Wrong image for pod: daemon-set-fj8xt. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Apr 6 22:04:51.433: INFO: Pod daemon-set-fj8xt is not available Apr 6 22:04:51.438: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 6 22:04:52.429: INFO: Pod daemon-set-4hnrh is not available Apr 6 22:04:52.433: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:99 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-2514, will wait for the garbage collector to delete the pods Apr 6 22:04:52.498: INFO: Deleting DaemonSet.extensions daemon-set took: 6.216182ms Apr 6 22:04:52.599: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.480389ms Apr 6 22:04:59.519: INFO: Number of nodes with available pods: 0 Apr 6 22:04:59.519: INFO: Number of running nodes: 0, number of available pods: 0 Apr 6 22:04:59.522: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-2514/daemonsets","resourceVersion":"5990718"},"items":null} Apr 6 22:04:59.524: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-2514/pods","resourceVersion":"5990718"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 6 22:04:59.530: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-2514" for this suite. 
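------------------------------
The sequence the log records above — push an update to an unpullable image, then revert before the rollout finishes — is what `kubectl rollout undo` does for a RollingUpdate DaemonSet. A sketch against the same object names the log shows; the container name app is a placeholder, since the log never prints it:

# Trigger a rollout that can never complete:
kubectl -n daemonsets-2514 set image daemonset/daemon-set app=foo:non-existent
# This stalls, because foo:non-existent cannot be pulled:
kubectl -n daemonsets-2514 rollout status daemonset/daemon-set
# Roll back to the previous revision; pods still running the old image are
# left untouched, which is the "without unnecessary restarts" assertion:
kubectl -n daemonsets-2514 rollout undo daemonset/daemon-set
kubectl -n daemonsets-2514 rollout history daemonset/daemon-set
------------------------------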
• [SLOW TEST:24.604 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should rollback without unnecessary restarts [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]","total":278,"completed":234,"skipped":3863,"failed":0} [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 6 22:04:59.536: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Apr 6 22:04:59.601: INFO: Waiting up to 5m0s for pod "downwardapi-volume-a4878fe7-25a8-445a-9a90-4ecda188a3f5" in namespace "downward-api-1616" to be "success or failure" Apr 6 22:04:59.604: INFO: Pod "downwardapi-volume-a4878fe7-25a8-445a-9a90-4ecda188a3f5": Phase="Pending", Reason="", readiness=false. Elapsed: 3.587262ms Apr 6 22:05:01.609: INFO: Pod "downwardapi-volume-a4878fe7-25a8-445a-9a90-4ecda188a3f5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.0080353s Apr 6 22:05:03.613: INFO: Pod "downwardapi-volume-a4878fe7-25a8-445a-9a90-4ecda188a3f5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012353444s STEP: Saw pod success Apr 6 22:05:03.613: INFO: Pod "downwardapi-volume-a4878fe7-25a8-445a-9a90-4ecda188a3f5" satisfied condition "success or failure" Apr 6 22:05:03.621: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-a4878fe7-25a8-445a-9a90-4ecda188a3f5 container client-container: STEP: delete the pod Apr 6 22:05:03.651: INFO: Waiting for pod downwardapi-volume-a4878fe7-25a8-445a-9a90-4ecda188a3f5 to disappear Apr 6 22:05:03.664: INFO: Pod downwardapi-volume-a4878fe7-25a8-445a-9a90-4ecda188a3f5 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 6 22:05:03.664: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-1616" for this suite. 
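[editor's note] The behavior verified above is that a downwardAPI volume reporting limits.memory falls back to the node's allocatable memory when the container declares no memory limit. A minimal sketch of such a pod, with illustrative names rather than the test's generated ones:

  kubectl apply -f - <<EOF
  apiVersion: v1
  kind: Pod
  metadata:
    name: downwardapi-volume-demo
  spec:
    restartPolicy: Never
    containers:
    - name: client-container
      image: busybox
      # No resources.limits.memory is set, so the file below reports
      # the node's allocatable memory instead
      command: ["sh", "-c", "cat /etc/podinfo/memory_limit"]
      volumeMounts:
      - name: podinfo
        mountPath: /etc/podinfo
    volumes:
    - name: podinfo
      downwardAPI:
        items:
        - path: "memory_limit"
          resourceFieldRef:
            containerName: client-container
            resource: limits.memory
  EOF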
•{"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":278,"completed":235,"skipped":3863,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 6 22:05:03.671: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap configmap-3799/configmap-test-0052341a-d1ae-435e-ba9b-8798b0eb2dc5 STEP: Creating a pod to test consume configMaps Apr 6 22:05:03.756: INFO: Waiting up to 5m0s for pod "pod-configmaps-487dc584-c961-4dfc-8d16-cf84350a75dd" in namespace "configmap-3799" to be "success or failure" Apr 6 22:05:03.760: INFO: Pod "pod-configmaps-487dc584-c961-4dfc-8d16-cf84350a75dd": Phase="Pending", Reason="", readiness=false. Elapsed: 3.909541ms Apr 6 22:05:05.843: INFO: Pod "pod-configmaps-487dc584-c961-4dfc-8d16-cf84350a75dd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.087749346s Apr 6 22:05:07.859: INFO: Pod "pod-configmaps-487dc584-c961-4dfc-8d16-cf84350a75dd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.102974375s STEP: Saw pod success Apr 6 22:05:07.859: INFO: Pod "pod-configmaps-487dc584-c961-4dfc-8d16-cf84350a75dd" satisfied condition "success or failure" Apr 6 22:05:07.861: INFO: Trying to get logs from node jerma-worker2 pod pod-configmaps-487dc584-c961-4dfc-8d16-cf84350a75dd container env-test: STEP: delete the pod Apr 6 22:05:07.919: INFO: Waiting for pod pod-configmaps-487dc584-c961-4dfc-8d16-cf84350a75dd to disappear Apr 6 22:05:07.929: INFO: Pod pod-configmaps-487dc584-c961-4dfc-8d16-cf84350a75dd no longer exists [AfterEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 6 22:05:07.929: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-3799" for this suite. 
•{"msg":"PASSED [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance]","total":278,"completed":236,"skipped":3890,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 6 22:05:07.940: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name projected-configmap-test-volume-97a472ed-12b3-4fa5-afcd-b62fab122d2b STEP: Creating a pod to test consume configMaps Apr 6 22:05:08.221: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-ab394fbc-6467-436a-b312-74fd57df21c9" in namespace "projected-5281" to be "success or failure" Apr 6 22:05:08.228: INFO: Pod "pod-projected-configmaps-ab394fbc-6467-436a-b312-74fd57df21c9": Phase="Pending", Reason="", readiness=false. Elapsed: 7.444718ms Apr 6 22:05:10.233: INFO: Pod "pod-projected-configmaps-ab394fbc-6467-436a-b312-74fd57df21c9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.011512177s Apr 6 22:05:12.237: INFO: Pod "pod-projected-configmaps-ab394fbc-6467-436a-b312-74fd57df21c9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.015601745s STEP: Saw pod success Apr 6 22:05:12.237: INFO: Pod "pod-projected-configmaps-ab394fbc-6467-436a-b312-74fd57df21c9" satisfied condition "success or failure" Apr 6 22:05:12.240: INFO: Trying to get logs from node jerma-worker pod pod-projected-configmaps-ab394fbc-6467-436a-b312-74fd57df21c9 container projected-configmap-volume-test: STEP: delete the pod Apr 6 22:05:12.260: INFO: Waiting for pod pod-projected-configmaps-ab394fbc-6467-436a-b312-74fd57df21c9 to disappear Apr 6 22:05:12.264: INFO: Pod pod-projected-configmaps-ab394fbc-6467-436a-b312-74fd57df21c9 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 6 22:05:12.265: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5281" for this suite. 
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":278,"completed":237,"skipped":3910,"failed":0} SSSSSSSSSSS ------------------------------ [sig-network] Services should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 6 22:05:12.270: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139 [It] should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating service multi-endpoint-test in namespace services-9091 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-9091 to expose endpoints map[] Apr 6 22:05:12.474: INFO: successfully validated that service multi-endpoint-test in namespace services-9091 exposes endpoints map[] (19.389412ms elapsed) STEP: Creating pod pod1 in namespace services-9091 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-9091 to expose endpoints map[pod1:[100]] Apr 6 22:05:16.583: INFO: successfully validated that service multi-endpoint-test in namespace services-9091 exposes endpoints map[pod1:[100]] (4.102912243s elapsed) STEP: Creating pod pod2 in namespace services-9091 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-9091 to expose endpoints map[pod1:[100] pod2:[101]] Apr 6 22:05:20.734: INFO: successfully validated that service multi-endpoint-test in namespace services-9091 exposes endpoints map[pod1:[100] pod2:[101]] (4.148376682s elapsed) STEP: Deleting pod pod1 in namespace services-9091 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-9091 to expose endpoints map[pod2:[101]] Apr 6 22:05:21.801: INFO: successfully validated that service multi-endpoint-test in namespace services-9091 exposes endpoints map[pod2:[101]] (1.062307485s elapsed) STEP: Deleting pod pod2 in namespace services-9091 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-9091 to expose endpoints map[] Apr 6 22:05:22.824: INFO: successfully validated that service multi-endpoint-test in namespace services-9091 exposes endpoints map[] (1.018677093s elapsed) [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 6 22:05:22.927: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-9091" for this suite. 
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143 • [SLOW TEST:10.669 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Services should serve multiport endpoints from pods [Conformance]","total":278,"completed":238,"skipped":3921,"failed":0} SSSSSSS ------------------------------ [sig-network] DNS should support configurable pod DNS nameservers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 6 22:05:22.939: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should support configurable pod DNS nameservers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod with dnsPolicy=None and customized dnsConfig... Apr 6 22:05:23.003: INFO: Created pod &Pod{ObjectMeta:{dns-4331 dns-4331 /api/v1/namespaces/dns-4331/pods/dns-4331 5188c17b-aab1-444f-937a-7a6eabfc82b7 5990931 0 2020-04-06 22:05:23 +0000 UTC map[] map[] [] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-p56g2,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-p56g2,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,Command:[],Args:[pause],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-p56g2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:None,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Tolerat
ion{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:&PodDNSConfig{Nameservers:[1.1.1.1],Searches:[resolv.conf.local],Options:[]PodDNSConfigOption{},},ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} STEP: Verifying customized DNS suffix list is configured on pod... Apr 6 22:05:27.010: INFO: ExecWithOptions {Command:[/agnhost dns-suffix] Namespace:dns-4331 PodName:dns-4331 ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 6 22:05:27.010: INFO: >>> kubeConfig: /root/.kube/config I0406 22:05:27.049910 6 log.go:172] (0xc002916c60) (0xc001e53900) Create stream I0406 22:05:27.049948 6 log.go:172] (0xc002916c60) (0xc001e53900) Stream added, broadcasting: 1 I0406 22:05:27.052207 6 log.go:172] (0xc002916c60) Reply frame received for 1 I0406 22:05:27.052261 6 log.go:172] (0xc002916c60) (0xc002298500) Create stream I0406 22:05:27.052286 6 log.go:172] (0xc002916c60) (0xc002298500) Stream added, broadcasting: 3 I0406 22:05:27.053658 6 log.go:172] (0xc002916c60) Reply frame received for 3 I0406 22:05:27.053695 6 log.go:172] (0xc002916c60) (0xc0022988c0) Create stream I0406 22:05:27.053710 6 log.go:172] (0xc002916c60) (0xc0022988c0) Stream added, broadcasting: 5 I0406 22:05:27.054785 6 log.go:172] (0xc002916c60) Reply frame received for 5 I0406 22:05:27.147492 6 log.go:172] (0xc002916c60) Data frame received for 3 I0406 22:05:27.147522 6 log.go:172] (0xc002298500) (3) Data frame handling I0406 22:05:27.147546 6 log.go:172] (0xc002298500) (3) Data frame sent I0406 22:05:27.148321 6 log.go:172] (0xc002916c60) Data frame received for 5 I0406 22:05:27.148382 6 log.go:172] (0xc0022988c0) (5) Data frame handling I0406 22:05:27.148442 6 log.go:172] (0xc002916c60) Data frame received for 3 I0406 22:05:27.148479 6 log.go:172] (0xc002298500) (3) Data frame handling I0406 22:05:27.150505 6 log.go:172] (0xc002916c60) Data frame received for 1 I0406 22:05:27.150538 6 log.go:172] (0xc001e53900) (1) Data frame handling I0406 22:05:27.150558 6 log.go:172] (0xc001e53900) (1) Data frame sent I0406 22:05:27.150577 6 log.go:172] (0xc002916c60) (0xc001e53900) Stream removed, broadcasting: 1 I0406 22:05:27.150632 6 log.go:172] (0xc002916c60) Go away received I0406 22:05:27.150732 6 log.go:172] (0xc002916c60) (0xc001e53900) Stream removed, broadcasting: 1 I0406 22:05:27.150755 6 log.go:172] (0xc002916c60) (0xc002298500) Stream removed, broadcasting: 3 I0406 22:05:27.150765 6 log.go:172] (0xc002916c60) (0xc0022988c0) Stream removed, broadcasting: 5 STEP: Verifying customized DNS server is configured on pod... 
Apr 6 22:05:27.150: INFO: ExecWithOptions {Command:[/agnhost dns-server-list] Namespace:dns-4331 PodName:dns-4331 ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 6 22:05:27.150: INFO: >>> kubeConfig: /root/.kube/config I0406 22:05:27.181260 6 log.go:172] (0xc002917290) (0xc001e53ae0) Create stream I0406 22:05:27.181296 6 log.go:172] (0xc002917290) (0xc001e53ae0) Stream added, broadcasting: 1 I0406 22:05:27.183337 6 log.go:172] (0xc002917290) Reply frame received for 1 I0406 22:05:27.183404 6 log.go:172] (0xc002917290) (0xc001f0c140) Create stream I0406 22:05:27.183434 6 log.go:172] (0xc002917290) (0xc001f0c140) Stream added, broadcasting: 3 I0406 22:05:27.184439 6 log.go:172] (0xc002917290) Reply frame received for 3 I0406 22:05:27.184492 6 log.go:172] (0xc002917290) (0xc001e53c20) Create stream I0406 22:05:27.184509 6 log.go:172] (0xc002917290) (0xc001e53c20) Stream added, broadcasting: 5 I0406 22:05:27.185558 6 log.go:172] (0xc002917290) Reply frame received for 5 I0406 22:05:27.266417 6 log.go:172] (0xc002917290) Data frame received for 3 I0406 22:05:27.266439 6 log.go:172] (0xc001f0c140) (3) Data frame handling I0406 22:05:27.266452 6 log.go:172] (0xc001f0c140) (3) Data frame sent I0406 22:05:27.267590 6 log.go:172] (0xc002917290) Data frame received for 5 I0406 22:05:27.267628 6 log.go:172] (0xc001e53c20) (5) Data frame handling I0406 22:05:27.267646 6 log.go:172] (0xc002917290) Data frame received for 3 I0406 22:05:27.267654 6 log.go:172] (0xc001f0c140) (3) Data frame handling I0406 22:05:27.268875 6 log.go:172] (0xc002917290) Data frame received for 1 I0406 22:05:27.268889 6 log.go:172] (0xc001e53ae0) (1) Data frame handling I0406 22:05:27.268897 6 log.go:172] (0xc001e53ae0) (1) Data frame sent I0406 22:05:27.268908 6 log.go:172] (0xc002917290) (0xc001e53ae0) Stream removed, broadcasting: 1 I0406 22:05:27.268925 6 log.go:172] (0xc002917290) Go away received I0406 22:05:27.269029 6 log.go:172] (0xc002917290) (0xc001e53ae0) Stream removed, broadcasting: 1 I0406 22:05:27.269049 6 log.go:172] (0xc002917290) (0xc001f0c140) Stream removed, broadcasting: 3 I0406 22:05:27.269064 6 log.go:172] (0xc002917290) (0xc001e53c20) Stream removed, broadcasting: 5 Apr 6 22:05:27.269: INFO: Deleting pod dns-4331... [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 6 22:05:27.289: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-4331" for this suite. 
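[editor's note] The pod spec dumped above shows exactly what dnsPolicy=None with a customized dnsConfig looks like: nameserver 1.1.1.1 and search domain resolv.conf.local, which the kubelet writes into the container's resolver config. A minimal sketch of the same setup (pod name is illustrative; the image and DNS values come from the dumped spec):

  kubectl apply -f - <<EOF
  apiVersion: v1
  kind: Pod
  metadata:
    name: dns-demo
  spec:
    dnsPolicy: None                # do not inherit cluster DNS settings
    dnsConfig:
      nameservers: ["1.1.1.1"]
      searches: ["resolv.conf.local"]
    containers:
    - name: agnhost
      image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8
      args: ["pause"]
  EOF
  # The customized values should appear in the container's resolv.conf
  kubectl exec dns-demo -- cat /etc/resolv.conf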
•{"msg":"PASSED [sig-network] DNS should support configurable pod DNS nameservers [Conformance]","total":278,"completed":239,"skipped":3928,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 6 22:05:27.308: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] binary data should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name configmap-test-upd-a827d7e6-323e-4a55-a9c9-ba01cc7279ba STEP: Creating the pod STEP: Waiting for pod with text data STEP: Waiting for pod with binary data [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 6 22:05:31.691: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-5948" for this suite. •{"msg":"PASSED [sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":240,"skipped":4017,"failed":0} SS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 6 22:05:31.702: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [It] should print the output to logs [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 6 22:05:35.777: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-9823" for this suite. 
•{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance]","total":278,"completed":241,"skipped":4019,"failed":0} SSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] custom resource defaulting for requests and from storage works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 6 22:05:35.784: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] custom resource defaulting for requests and from storage works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Apr 6 22:05:35.845: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 6 22:05:37.047: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-71" for this suite. •{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] custom resource defaulting for requests and from storage works [Conformance]","total":278,"completed":242,"skipped":4026,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 6 22:05:37.205: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename e2e-kubelet-etc-hosts STEP: Waiting for a default service account to be provisioned in namespace [It] should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Setting up the test STEP: Creating hostNetwork=false pod STEP: Creating hostNetwork=true pod STEP: Running the test STEP: Verifying /etc/hosts of container is kubelet-managed for pod with hostNetwork=false Apr 6 22:05:47.404: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-2246 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 6 22:05:47.404: INFO: >>> kubeConfig: /root/.kube/config I0406 22:05:47.434038 6 log.go:172] (0xc002916bb0) (0xc0013814a0) Create stream I0406 22:05:47.434061 6 log.go:172] (0xc002916bb0) (0xc0013814a0) Stream added, broadcasting: 1 I0406 22:05:47.435754 6 log.go:172] (0xc002916bb0) Reply frame received for 1 I0406 22:05:47.435786 6 log.go:172] (0xc002916bb0) (0xc0013817c0) Create stream I0406 22:05:47.435794 6 log.go:172] (0xc002916bb0) (0xc0013817c0) Stream added, broadcasting: 3 I0406 22:05:47.437060 6 log.go:172] 
(0xc002916bb0) Reply frame received for 3 I0406 22:05:47.437284 6 log.go:172] (0xc002916bb0) (0xc00141a3c0) Create stream I0406 22:05:47.437319 6 log.go:172] (0xc002916bb0) (0xc00141a3c0) Stream added, broadcasting: 5 I0406 22:05:47.438499 6 log.go:172] (0xc002916bb0) Reply frame received for 5 I0406 22:05:47.506516 6 log.go:172] (0xc002916bb0) Data frame received for 5 I0406 22:05:47.506558 6 log.go:172] (0xc00141a3c0) (5) Data frame handling I0406 22:05:47.506588 6 log.go:172] (0xc002916bb0) Data frame received for 3 I0406 22:05:47.506605 6 log.go:172] (0xc0013817c0) (3) Data frame handling I0406 22:05:47.506622 6 log.go:172] (0xc0013817c0) (3) Data frame sent I0406 22:05:47.506636 6 log.go:172] (0xc002916bb0) Data frame received for 3 I0406 22:05:47.506647 6 log.go:172] (0xc0013817c0) (3) Data frame handling I0406 22:05:47.508020 6 log.go:172] (0xc002916bb0) Data frame received for 1 I0406 22:05:47.508059 6 log.go:172] (0xc0013814a0) (1) Data frame handling I0406 22:05:47.508104 6 log.go:172] (0xc0013814a0) (1) Data frame sent I0406 22:05:47.508186 6 log.go:172] (0xc002916bb0) (0xc0013814a0) Stream removed, broadcasting: 1 I0406 22:05:47.508236 6 log.go:172] (0xc002916bb0) Go away received I0406 22:05:47.508339 6 log.go:172] (0xc002916bb0) (0xc0013814a0) Stream removed, broadcasting: 1 I0406 22:05:47.508365 6 log.go:172] (0xc002916bb0) (0xc0013817c0) Stream removed, broadcasting: 3 I0406 22:05:47.508384 6 log.go:172] (0xc002916bb0) (0xc00141a3c0) Stream removed, broadcasting: 5 Apr 6 22:05:47.508: INFO: Exec stderr: "" Apr 6 22:05:47.508: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-2246 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 6 22:05:47.508: INFO: >>> kubeConfig: /root/.kube/config I0406 22:05:47.539223 6 log.go:172] (0xc0029b8c60) (0xc00141a960) Create stream I0406 22:05:47.539244 6 log.go:172] (0xc0029b8c60) (0xc00141a960) Stream added, broadcasting: 1 I0406 22:05:47.540798 6 log.go:172] (0xc0029b8c60) Reply frame received for 1 I0406 22:05:47.540827 6 log.go:172] (0xc0029b8c60) (0xc00292e0a0) Create stream I0406 22:05:47.540839 6 log.go:172] (0xc0029b8c60) (0xc00292e0a0) Stream added, broadcasting: 3 I0406 22:05:47.541892 6 log.go:172] (0xc0029b8c60) Reply frame received for 3 I0406 22:05:47.541929 6 log.go:172] (0xc0029b8c60) (0xc001789c20) Create stream I0406 22:05:47.541942 6 log.go:172] (0xc0029b8c60) (0xc001789c20) Stream added, broadcasting: 5 I0406 22:05:47.542775 6 log.go:172] (0xc0029b8c60) Reply frame received for 5 I0406 22:05:47.601444 6 log.go:172] (0xc0029b8c60) Data frame received for 3 I0406 22:05:47.601476 6 log.go:172] (0xc00292e0a0) (3) Data frame handling I0406 22:05:47.601491 6 log.go:172] (0xc00292e0a0) (3) Data frame sent I0406 22:05:47.601507 6 log.go:172] (0xc0029b8c60) Data frame received for 3 I0406 22:05:47.601523 6 log.go:172] (0xc00292e0a0) (3) Data frame handling I0406 22:05:47.601560 6 log.go:172] (0xc0029b8c60) Data frame received for 5 I0406 22:05:47.601584 6 log.go:172] (0xc001789c20) (5) Data frame handling I0406 22:05:47.602964 6 log.go:172] (0xc0029b8c60) Data frame received for 1 I0406 22:05:47.602991 6 log.go:172] (0xc00141a960) (1) Data frame handling I0406 22:05:47.603005 6 log.go:172] (0xc00141a960) (1) Data frame sent I0406 22:05:47.603024 6 log.go:172] (0xc0029b8c60) (0xc00141a960) Stream removed, broadcasting: 1 I0406 22:05:47.603048 6 log.go:172] (0xc0029b8c60) Go away received I0406 22:05:47.603229 6 log.go:172] 
(0xc0029b8c60) (0xc00141a960) Stream removed, broadcasting: 1 I0406 22:05:47.603253 6 log.go:172] (0xc0029b8c60) (0xc00292e0a0) Stream removed, broadcasting: 3 I0406 22:05:47.603273 6 log.go:172] (0xc0029b8c60) (0xc001789c20) Stream removed, broadcasting: 5 Apr 6 22:05:47.603: INFO: Exec stderr: "" Apr 6 22:05:47.603: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-2246 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 6 22:05:47.603: INFO: >>> kubeConfig: /root/.kube/config I0406 22:05:47.637682 6 log.go:172] (0xc0029b9550) (0xc00141abe0) Create stream I0406 22:05:47.637712 6 log.go:172] (0xc0029b9550) (0xc00141abe0) Stream added, broadcasting: 1 I0406 22:05:47.640363 6 log.go:172] (0xc0029b9550) Reply frame received for 1 I0406 22:05:47.640406 6 log.go:172] (0xc0029b9550) (0xc00141ae60) Create stream I0406 22:05:47.640427 6 log.go:172] (0xc0029b9550) (0xc00141ae60) Stream added, broadcasting: 3 I0406 22:05:47.641650 6 log.go:172] (0xc0029b9550) Reply frame received for 3 I0406 22:05:47.641703 6 log.go:172] (0xc0029b9550) (0xc001789cc0) Create stream I0406 22:05:47.641722 6 log.go:172] (0xc0029b9550) (0xc001789cc0) Stream added, broadcasting: 5 I0406 22:05:47.642638 6 log.go:172] (0xc0029b9550) Reply frame received for 5 I0406 22:05:47.714744 6 log.go:172] (0xc0029b9550) Data frame received for 5 I0406 22:05:47.714779 6 log.go:172] (0xc001789cc0) (5) Data frame handling I0406 22:05:47.714805 6 log.go:172] (0xc0029b9550) Data frame received for 3 I0406 22:05:47.714815 6 log.go:172] (0xc00141ae60) (3) Data frame handling I0406 22:05:47.714830 6 log.go:172] (0xc00141ae60) (3) Data frame sent I0406 22:05:47.714839 6 log.go:172] (0xc0029b9550) Data frame received for 3 I0406 22:05:47.714850 6 log.go:172] (0xc00141ae60) (3) Data frame handling I0406 22:05:47.716420 6 log.go:172] (0xc0029b9550) Data frame received for 1 I0406 22:05:47.716529 6 log.go:172] (0xc00141abe0) (1) Data frame handling I0406 22:05:47.716577 6 log.go:172] (0xc00141abe0) (1) Data frame sent I0406 22:05:47.716597 6 log.go:172] (0xc0029b9550) (0xc00141abe0) Stream removed, broadcasting: 1 I0406 22:05:47.716613 6 log.go:172] (0xc0029b9550) Go away received I0406 22:05:47.716721 6 log.go:172] (0xc0029b9550) (0xc00141abe0) Stream removed, broadcasting: 1 I0406 22:05:47.716743 6 log.go:172] (0xc0029b9550) (0xc00141ae60) Stream removed, broadcasting: 3 I0406 22:05:47.716751 6 log.go:172] (0xc0029b9550) (0xc001789cc0) Stream removed, broadcasting: 5 Apr 6 22:05:47.716: INFO: Exec stderr: "" Apr 6 22:05:47.716: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-2246 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 6 22:05:47.716: INFO: >>> kubeConfig: /root/.kube/config I0406 22:05:47.747041 6 log.go:172] (0xc0008e4420) (0xc00292e460) Create stream I0406 22:05:47.747075 6 log.go:172] (0xc0008e4420) (0xc00292e460) Stream added, broadcasting: 1 I0406 22:05:47.749736 6 log.go:172] (0xc0008e4420) Reply frame received for 1 I0406 22:05:47.749782 6 log.go:172] (0xc0008e4420) (0xc001a040a0) Create stream I0406 22:05:47.749798 6 log.go:172] (0xc0008e4420) (0xc001a040a0) Stream added, broadcasting: 3 I0406 22:05:47.750873 6 log.go:172] (0xc0008e4420) Reply frame received for 3 I0406 22:05:47.750919 6 log.go:172] (0xc0008e4420) (0xc001a041e0) Create stream I0406 22:05:47.750935 6 log.go:172] (0xc0008e4420) (0xc001a041e0) Stream added, 
broadcasting: 5 I0406 22:05:47.751895 6 log.go:172] (0xc0008e4420) Reply frame received for 5 I0406 22:05:47.829482 6 log.go:172] (0xc0008e4420) Data frame received for 5 I0406 22:05:47.829527 6 log.go:172] (0xc001a041e0) (5) Data frame handling I0406 22:05:47.829567 6 log.go:172] (0xc0008e4420) Data frame received for 3 I0406 22:05:47.829592 6 log.go:172] (0xc001a040a0) (3) Data frame handling I0406 22:05:47.829605 6 log.go:172] (0xc001a040a0) (3) Data frame sent I0406 22:05:47.829625 6 log.go:172] (0xc0008e4420) Data frame received for 3 I0406 22:05:47.829636 6 log.go:172] (0xc001a040a0) (3) Data frame handling I0406 22:05:47.831284 6 log.go:172] (0xc0008e4420) Data frame received for 1 I0406 22:05:47.831330 6 log.go:172] (0xc00292e460) (1) Data frame handling I0406 22:05:47.831350 6 log.go:172] (0xc00292e460) (1) Data frame sent I0406 22:05:47.831384 6 log.go:172] (0xc0008e4420) (0xc00292e460) Stream removed, broadcasting: 1 I0406 22:05:47.831414 6 log.go:172] (0xc0008e4420) Go away received I0406 22:05:47.831513 6 log.go:172] (0xc0008e4420) (0xc00292e460) Stream removed, broadcasting: 1 I0406 22:05:47.831551 6 log.go:172] (0xc0008e4420) (0xc001a040a0) Stream removed, broadcasting: 3 I0406 22:05:47.831568 6 log.go:172] (0xc0008e4420) (0xc001a041e0) Stream removed, broadcasting: 5 Apr 6 22:05:47.831: INFO: Exec stderr: "" STEP: Verifying /etc/hosts of container is not kubelet-managed since container specifies /etc/hosts mount Apr 6 22:05:47.831: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-2246 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 6 22:05:47.831: INFO: >>> kubeConfig: /root/.kube/config I0406 22:05:47.866619 6 log.go:172] (0xc0027cd3f0) (0xc001a04640) Create stream I0406 22:05:47.866651 6 log.go:172] (0xc0027cd3f0) (0xc001a04640) Stream added, broadcasting: 1 I0406 22:05:47.868623 6 log.go:172] (0xc0027cd3f0) Reply frame received for 1 I0406 22:05:47.868682 6 log.go:172] (0xc0027cd3f0) (0xc001789ea0) Create stream I0406 22:05:47.868698 6 log.go:172] (0xc0027cd3f0) (0xc001789ea0) Stream added, broadcasting: 3 I0406 22:05:47.869731 6 log.go:172] (0xc0027cd3f0) Reply frame received for 3 I0406 22:05:47.869767 6 log.go:172] (0xc0027cd3f0) (0xc00141af00) Create stream I0406 22:05:47.869784 6 log.go:172] (0xc0027cd3f0) (0xc00141af00) Stream added, broadcasting: 5 I0406 22:05:47.870596 6 log.go:172] (0xc0027cd3f0) Reply frame received for 5 I0406 22:05:47.916641 6 log.go:172] (0xc0027cd3f0) Data frame received for 3 I0406 22:05:47.916667 6 log.go:172] (0xc001789ea0) (3) Data frame handling I0406 22:05:47.916696 6 log.go:172] (0xc001789ea0) (3) Data frame sent I0406 22:05:47.916716 6 log.go:172] (0xc0027cd3f0) Data frame received for 3 I0406 22:05:47.916724 6 log.go:172] (0xc001789ea0) (3) Data frame handling I0406 22:05:47.917093 6 log.go:172] (0xc0027cd3f0) Data frame received for 5 I0406 22:05:47.917108 6 log.go:172] (0xc00141af00) (5) Data frame handling I0406 22:05:47.918901 6 log.go:172] (0xc0027cd3f0) Data frame received for 1 I0406 22:05:47.918916 6 log.go:172] (0xc001a04640) (1) Data frame handling I0406 22:05:47.918924 6 log.go:172] (0xc001a04640) (1) Data frame sent I0406 22:05:47.918932 6 log.go:172] (0xc0027cd3f0) (0xc001a04640) Stream removed, broadcasting: 1 I0406 22:05:47.918973 6 log.go:172] (0xc0027cd3f0) Go away received I0406 22:05:47.919032 6 log.go:172] (0xc0027cd3f0) (0xc001a04640) Stream removed, broadcasting: 1 I0406 22:05:47.919046 6 log.go:172] 
(0xc0027cd3f0) (0xc001789ea0) Stream removed, broadcasting: 3 I0406 22:05:47.919056 6 log.go:172] (0xc0027cd3f0) (0xc00141af00) Stream removed, broadcasting: 5 Apr 6 22:05:47.919: INFO: Exec stderr: "" Apr 6 22:05:47.919: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-2246 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 6 22:05:47.919: INFO: >>> kubeConfig: /root/.kube/config I0406 22:05:47.943715 6 log.go:172] (0xc0027cdad0) (0xc001a048c0) Create stream I0406 22:05:47.943747 6 log.go:172] (0xc0027cdad0) (0xc001a048c0) Stream added, broadcasting: 1 I0406 22:05:47.945889 6 log.go:172] (0xc0027cdad0) Reply frame received for 1 I0406 22:05:47.945917 6 log.go:172] (0xc0027cdad0) (0xc00141b220) Create stream I0406 22:05:47.945924 6 log.go:172] (0xc0027cdad0) (0xc00141b220) Stream added, broadcasting: 3 I0406 22:05:47.946682 6 log.go:172] (0xc0027cdad0) Reply frame received for 3 I0406 22:05:47.946707 6 log.go:172] (0xc0027cdad0) (0xc001381900) Create stream I0406 22:05:47.946714 6 log.go:172] (0xc0027cdad0) (0xc001381900) Stream added, broadcasting: 5 I0406 22:05:47.947518 6 log.go:172] (0xc0027cdad0) Reply frame received for 5 I0406 22:05:48.003409 6 log.go:172] (0xc0027cdad0) Data frame received for 3 I0406 22:05:48.003438 6 log.go:172] (0xc00141b220) (3) Data frame handling I0406 22:05:48.003456 6 log.go:172] (0xc00141b220) (3) Data frame sent I0406 22:05:48.003467 6 log.go:172] (0xc0027cdad0) Data frame received for 3 I0406 22:05:48.003473 6 log.go:172] (0xc00141b220) (3) Data frame handling I0406 22:05:48.003766 6 log.go:172] (0xc0027cdad0) Data frame received for 5 I0406 22:05:48.003796 6 log.go:172] (0xc001381900) (5) Data frame handling I0406 22:05:48.005314 6 log.go:172] (0xc0027cdad0) Data frame received for 1 I0406 22:05:48.005345 6 log.go:172] (0xc001a048c0) (1) Data frame handling I0406 22:05:48.005369 6 log.go:172] (0xc001a048c0) (1) Data frame sent I0406 22:05:48.005393 6 log.go:172] (0xc0027cdad0) (0xc001a048c0) Stream removed, broadcasting: 1 I0406 22:05:48.005490 6 log.go:172] (0xc0027cdad0) Go away received I0406 22:05:48.005581 6 log.go:172] (0xc0027cdad0) (0xc001a048c0) Stream removed, broadcasting: 1 I0406 22:05:48.005598 6 log.go:172] (0xc0027cdad0) (0xc00141b220) Stream removed, broadcasting: 3 I0406 22:05:48.005610 6 log.go:172] (0xc0027cdad0) (0xc001381900) Stream removed, broadcasting: 5 Apr 6 22:05:48.005: INFO: Exec stderr: "" STEP: Verifying /etc/hosts content of container is not kubelet-managed for pod with hostNetwork=true Apr 6 22:05:48.005: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-2246 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 6 22:05:48.005: INFO: >>> kubeConfig: /root/.kube/config I0406 22:05:48.037081 6 log.go:172] (0xc0029248f0) (0xc002816280) Create stream I0406 22:05:48.037242 6 log.go:172] (0xc0029248f0) (0xc002816280) Stream added, broadcasting: 1 I0406 22:05:48.039190 6 log.go:172] (0xc0029248f0) Reply frame received for 1 I0406 22:05:48.039242 6 log.go:172] (0xc0029248f0) (0xc0013819a0) Create stream I0406 22:05:48.039259 6 log.go:172] (0xc0029248f0) (0xc0013819a0) Stream added, broadcasting: 3 I0406 22:05:48.040417 6 log.go:172] (0xc0029248f0) Reply frame received for 3 I0406 22:05:48.040436 6 log.go:172] (0xc0029248f0) (0xc00292e5a0) Create stream I0406 22:05:48.040443 6 log.go:172] (0xc0029248f0) (0xc00292e5a0) 
Stream added, broadcasting: 5 I0406 22:05:48.041678 6 log.go:172] (0xc0029248f0) Reply frame received for 5 I0406 22:05:48.092466 6 log.go:172] (0xc0029248f0) Data frame received for 5 I0406 22:05:48.092513 6 log.go:172] (0xc0029248f0) Data frame received for 3 I0406 22:05:48.092570 6 log.go:172] (0xc0013819a0) (3) Data frame handling I0406 22:05:48.092597 6 log.go:172] (0xc00292e5a0) (5) Data frame handling I0406 22:05:48.092652 6 log.go:172] (0xc0013819a0) (3) Data frame sent I0406 22:05:48.092685 6 log.go:172] (0xc0029248f0) Data frame received for 3 I0406 22:05:48.092701 6 log.go:172] (0xc0013819a0) (3) Data frame handling I0406 22:05:48.094279 6 log.go:172] (0xc0029248f0) Data frame received for 1 I0406 22:05:48.094320 6 log.go:172] (0xc002816280) (1) Data frame handling I0406 22:05:48.094357 6 log.go:172] (0xc002816280) (1) Data frame sent I0406 22:05:48.094382 6 log.go:172] (0xc0029248f0) (0xc002816280) Stream removed, broadcasting: 1 I0406 22:05:48.094399 6 log.go:172] (0xc0029248f0) Go away received I0406 22:05:48.094533 6 log.go:172] (0xc0029248f0) (0xc002816280) Stream removed, broadcasting: 1 I0406 22:05:48.094561 6 log.go:172] (0xc0029248f0) (0xc0013819a0) Stream removed, broadcasting: 3 I0406 22:05:48.094583 6 log.go:172] (0xc0029248f0) (0xc00292e5a0) Stream removed, broadcasting: 5 Apr 6 22:05:48.094: INFO: Exec stderr: "" Apr 6 22:05:48.094: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-2246 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 6 22:05:48.094: INFO: >>> kubeConfig: /root/.kube/config I0406 22:05:48.150334 6 log.go:172] (0xc0029171e0) (0xc001381d60) Create stream I0406 22:05:48.150362 6 log.go:172] (0xc0029171e0) (0xc001381d60) Stream added, broadcasting: 1 I0406 22:05:48.152107 6 log.go:172] (0xc0029171e0) Reply frame received for 1 I0406 22:05:48.152146 6 log.go:172] (0xc0029171e0) (0xc001381ea0) Create stream I0406 22:05:48.152163 6 log.go:172] (0xc0029171e0) (0xc001381ea0) Stream added, broadcasting: 3 I0406 22:05:48.152891 6 log.go:172] (0xc0029171e0) Reply frame received for 3 I0406 22:05:48.152929 6 log.go:172] (0xc0029171e0) (0xc00122e000) Create stream I0406 22:05:48.152943 6 log.go:172] (0xc0029171e0) (0xc00122e000) Stream added, broadcasting: 5 I0406 22:05:48.154124 6 log.go:172] (0xc0029171e0) Reply frame received for 5 I0406 22:05:48.199630 6 log.go:172] (0xc0029171e0) Data frame received for 3 I0406 22:05:48.199675 6 log.go:172] (0xc001381ea0) (3) Data frame handling I0406 22:05:48.199696 6 log.go:172] (0xc001381ea0) (3) Data frame sent I0406 22:05:48.200564 6 log.go:172] (0xc0029171e0) Data frame received for 5 I0406 22:05:48.200592 6 log.go:172] (0xc00122e000) (5) Data frame handling I0406 22:05:48.200628 6 log.go:172] (0xc0029171e0) Data frame received for 3 I0406 22:05:48.200655 6 log.go:172] (0xc001381ea0) (3) Data frame handling I0406 22:05:48.211654 6 log.go:172] (0xc0029171e0) Data frame received for 1 I0406 22:05:48.211681 6 log.go:172] (0xc001381d60) (1) Data frame handling I0406 22:05:48.211693 6 log.go:172] (0xc001381d60) (1) Data frame sent I0406 22:05:48.211714 6 log.go:172] (0xc0029171e0) (0xc001381d60) Stream removed, broadcasting: 1 I0406 22:05:48.211814 6 log.go:172] (0xc0029171e0) (0xc001381d60) Stream removed, broadcasting: 1 I0406 22:05:48.211831 6 log.go:172] (0xc0029171e0) (0xc001381ea0) Stream removed, broadcasting: 3 I0406 22:05:48.211842 6 log.go:172] (0xc0029171e0) (0xc00122e000) Stream removed, 
broadcasting: 5 Apr 6 22:05:48.211: INFO: Exec stderr: "" Apr 6 22:05:48.211: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-2246 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 6 22:05:48.211: INFO: >>> kubeConfig: /root/.kube/config I0406 22:05:48.213549 6 log.go:172] (0xc0029171e0) Go away received I0406 22:05:48.234163 6 log.go:172] (0xc002917810) (0xc00122e320) Create stream I0406 22:05:48.234185 6 log.go:172] (0xc002917810) (0xc00122e320) Stream added, broadcasting: 1 I0406 22:05:48.236009 6 log.go:172] (0xc002917810) Reply frame received for 1 I0406 22:05:48.236034 6 log.go:172] (0xc002917810) (0xc00292eaa0) Create stream I0406 22:05:48.236044 6 log.go:172] (0xc002917810) (0xc00292eaa0) Stream added, broadcasting: 3 I0406 22:05:48.237016 6 log.go:172] (0xc002917810) Reply frame received for 3 I0406 22:05:48.237055 6 log.go:172] (0xc002917810) (0xc002816500) Create stream I0406 22:05:48.237069 6 log.go:172] (0xc002917810) (0xc002816500) Stream added, broadcasting: 5 I0406 22:05:48.238231 6 log.go:172] (0xc002917810) Reply frame received for 5 I0406 22:05:48.291482 6 log.go:172] (0xc002917810) Data frame received for 5 I0406 22:05:48.291529 6 log.go:172] (0xc002816500) (5) Data frame handling I0406 22:05:48.291586 6 log.go:172] (0xc002917810) Data frame received for 3 I0406 22:05:48.291626 6 log.go:172] (0xc00292eaa0) (3) Data frame handling I0406 22:05:48.291649 6 log.go:172] (0xc00292eaa0) (3) Data frame sent I0406 22:05:48.291665 6 log.go:172] (0xc002917810) Data frame received for 3 I0406 22:05:48.291678 6 log.go:172] (0xc00292eaa0) (3) Data frame handling I0406 22:05:48.293393 6 log.go:172] (0xc002917810) Data frame received for 1 I0406 22:05:48.293428 6 log.go:172] (0xc00122e320) (1) Data frame handling I0406 22:05:48.293455 6 log.go:172] (0xc00122e320) (1) Data frame sent I0406 22:05:48.293475 6 log.go:172] (0xc002917810) (0xc00122e320) Stream removed, broadcasting: 1 I0406 22:05:48.293555 6 log.go:172] (0xc002917810) (0xc00122e320) Stream removed, broadcasting: 1 I0406 22:05:48.293582 6 log.go:172] (0xc002917810) (0xc00292eaa0) Stream removed, broadcasting: 3 I0406 22:05:48.293623 6 log.go:172] (0xc002917810) (0xc002816500) Stream removed, broadcasting: 5 Apr 6 22:05:48.293: INFO: Exec stderr: "" Apr 6 22:05:48.293: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-2246 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} I0406 22:05:48.293710 6 log.go:172] (0xc002917810) Go away received Apr 6 22:05:48.293: INFO: >>> kubeConfig: /root/.kube/config I0406 22:05:48.323252 6 log.go:172] (0xc0008e4a50) (0xc00292f400) Create stream I0406 22:05:48.323274 6 log.go:172] (0xc0008e4a50) (0xc00292f400) Stream added, broadcasting: 1 I0406 22:05:48.325517 6 log.go:172] (0xc0008e4a50) Reply frame received for 1 I0406 22:05:48.325558 6 log.go:172] (0xc0008e4a50) (0xc001a04a00) Create stream I0406 22:05:48.325573 6 log.go:172] (0xc0008e4a50) (0xc001a04a00) Stream added, broadcasting: 3 I0406 22:05:48.326524 6 log.go:172] (0xc0008e4a50) Reply frame received for 3 I0406 22:05:48.326577 6 log.go:172] (0xc0008e4a50) (0xc00141b2c0) Create stream I0406 22:05:48.326594 6 log.go:172] (0xc0008e4a50) (0xc00141b2c0) Stream added, broadcasting: 5 I0406 22:05:48.327428 6 log.go:172] (0xc0008e4a50) Reply frame received for 5 I0406 22:05:48.391535 6 log.go:172] (0xc0008e4a50) Data frame 
received for 3 I0406 22:05:48.391586 6 log.go:172] (0xc001a04a00) (3) Data frame handling I0406 22:05:48.391601 6 log.go:172] (0xc001a04a00) (3) Data frame sent I0406 22:05:48.391622 6 log.go:172] (0xc0008e4a50) Data frame received for 3 I0406 22:05:48.391641 6 log.go:172] (0xc001a04a00) (3) Data frame handling I0406 22:05:48.391689 6 log.go:172] (0xc0008e4a50) Data frame received for 5 I0406 22:05:48.391724 6 log.go:172] (0xc00141b2c0) (5) Data frame handling I0406 22:05:48.393421 6 log.go:172] (0xc0008e4a50) Data frame received for 1 I0406 22:05:48.393459 6 log.go:172] (0xc00292f400) (1) Data frame handling I0406 22:05:48.393506 6 log.go:172] (0xc00292f400) (1) Data frame sent I0406 22:05:48.393533 6 log.go:172] (0xc0008e4a50) (0xc00292f400) Stream removed, broadcasting: 1 I0406 22:05:48.393594 6 log.go:172] (0xc0008e4a50) Go away received I0406 22:05:48.393688 6 log.go:172] (0xc0008e4a50) (0xc00292f400) Stream removed, broadcasting: 1 I0406 22:05:48.393775 6 log.go:172] (0xc0008e4a50) (0xc001a04a00) Stream removed, broadcasting: 3 I0406 22:05:48.393845 6 log.go:172] (0xc0008e4a50) (0xc00141b2c0) Stream removed, broadcasting: 5 Apr 6 22:05:48.393: INFO: Exec stderr: "" [AfterEach] [k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 6 22:05:48.393: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-kubelet-etc-hosts-2246" for this suite. • [SLOW TEST:11.197 seconds] [k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":243,"skipped":4041,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 6 22:05:48.403: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177 [It] should be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: updating the pod Apr 6 22:05:53.037: INFO: Successfully updated pod "pod-update-05de353b-c2ba-4c55-9ac3-6fb1f58a3f31" STEP: verifying the updated pod is in kubernetes Apr 6 22:05:53.052: INFO: Pod update OK [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 6 22:05:53.053: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-1622" for this suite. 
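[editor's note] The "Successfully updated pod" step above can be approximated with a metadata patch against a running pod, much like the annotation patch earlier in this run; which field the conformance test actually updates is not shown in this excerpt, so the label patch below is an assumption. Pod name is illustrative; the agnhost pause image is taken from the spec dumps earlier in the log:

  # Start a long-running pod to update in place
  kubectl run pod-update-demo --image=gcr.io/kubernetes-e2e-test-images/agnhost:2.8 \
    --restart=Never -- pause

  # Mutable metadata such as labels can be changed on a live pod
  kubectl patch pod pod-update-demo -p '{"metadata":{"labels":{"time":"now"}}}'

  # Verify the update took effect
  kubectl get pod pod-update-demo --show-labels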
•{"msg":"PASSED [k8s.io] Pods should be updated [NodeConformance] [Conformance]","total":278,"completed":244,"skipped":4063,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 6 22:05:53.063: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-203 A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-203;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-203 A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-203;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-203.svc A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-203.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-203.svc A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-203.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-203.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-203.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-203.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-203.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-203.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-203.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-203.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-203.svc;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-203.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 140.193.108.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.108.193.140_udp@PTR;check="$$(dig +tcp +noall +answer +search 140.193.108.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.108.193.140_tcp@PTR;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-203 A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-203;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-203 A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-203;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-203.svc A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-203.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-203.svc A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-203.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-203.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-203.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-203.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-203.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-203.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-203.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-203.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-203.svc;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-203.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 140.193.108.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.108.193.140_udp@PTR;check="$$(dig +tcp +noall +answer +search 140.193.108.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.108.193.140_tcp@PTR;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Apr 6 22:05:59.232: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-203/dns-test-2c41f0ec-f90b-4479-8b0e-ccb75329f1ee: the server could not find the requested resource (get pods dns-test-2c41f0ec-f90b-4479-8b0e-ccb75329f1ee) Apr 6 22:05:59.235: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-203/dns-test-2c41f0ec-f90b-4479-8b0e-ccb75329f1ee: the server could not find the requested resource (get pods dns-test-2c41f0ec-f90b-4479-8b0e-ccb75329f1ee) Apr 6 22:05:59.238: INFO: Unable to read wheezy_udp@dns-test-service.dns-203 from pod dns-203/dns-test-2c41f0ec-f90b-4479-8b0e-ccb75329f1ee: the server could not find the requested resource (get pods dns-test-2c41f0ec-f90b-4479-8b0e-ccb75329f1ee) Apr 6 22:05:59.241: INFO: Unable to read wheezy_tcp@dns-test-service.dns-203 from pod dns-203/dns-test-2c41f0ec-f90b-4479-8b0e-ccb75329f1ee: the server could not find the requested resource (get pods dns-test-2c41f0ec-f90b-4479-8b0e-ccb75329f1ee) Apr 6 22:05:59.244: INFO: Unable to read wheezy_udp@dns-test-service.dns-203.svc from pod dns-203/dns-test-2c41f0ec-f90b-4479-8b0e-ccb75329f1ee: the server could not find the requested resource (get pods dns-test-2c41f0ec-f90b-4479-8b0e-ccb75329f1ee) Apr 6 22:05:59.247: INFO: Unable to read wheezy_tcp@dns-test-service.dns-203.svc from pod dns-203/dns-test-2c41f0ec-f90b-4479-8b0e-ccb75329f1ee: the server could not find the requested resource (get pods dns-test-2c41f0ec-f90b-4479-8b0e-ccb75329f1ee) Apr 6 22:05:59.250: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-203.svc from pod dns-203/dns-test-2c41f0ec-f90b-4479-8b0e-ccb75329f1ee: the server could not find the requested resource (get pods dns-test-2c41f0ec-f90b-4479-8b0e-ccb75329f1ee) Apr 6 22:05:59.253: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-203.svc from pod dns-203/dns-test-2c41f0ec-f90b-4479-8b0e-ccb75329f1ee: the server could not find the requested resource (get pods dns-test-2c41f0ec-f90b-4479-8b0e-ccb75329f1ee) Apr 6 22:05:59.274: INFO: Unable to read jessie_udp@dns-test-service from pod dns-203/dns-test-2c41f0ec-f90b-4479-8b0e-ccb75329f1ee: the server could not find the requested resource (get pods dns-test-2c41f0ec-f90b-4479-8b0e-ccb75329f1ee) Apr 6 22:05:59.277: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-203/dns-test-2c41f0ec-f90b-4479-8b0e-ccb75329f1ee: the server could not find the requested resource (get pods dns-test-2c41f0ec-f90b-4479-8b0e-ccb75329f1ee) Apr 6 22:05:59.280: INFO: Unable to read jessie_udp@dns-test-service.dns-203 from pod dns-203/dns-test-2c41f0ec-f90b-4479-8b0e-ccb75329f1ee: the server could not find the requested resource (get pods dns-test-2c41f0ec-f90b-4479-8b0e-ccb75329f1ee) Apr 6 22:05:59.284: INFO: Unable to read jessie_tcp@dns-test-service.dns-203 from pod dns-203/dns-test-2c41f0ec-f90b-4479-8b0e-ccb75329f1ee: the server could not find the requested resource (get pods dns-test-2c41f0ec-f90b-4479-8b0e-ccb75329f1ee) Apr 6 22:05:59.287: INFO: Unable to read jessie_udp@dns-test-service.dns-203.svc from pod dns-203/dns-test-2c41f0ec-f90b-4479-8b0e-ccb75329f1ee: the server could not find the requested resource (get pods dns-test-2c41f0ec-f90b-4479-8b0e-ccb75329f1ee) Apr 6 22:05:59.290: INFO: Unable to read jessie_tcp@dns-test-service.dns-203.svc 
from pod dns-203/dns-test-2c41f0ec-f90b-4479-8b0e-ccb75329f1ee: the server could not find the requested resource (get pods dns-test-2c41f0ec-f90b-4479-8b0e-ccb75329f1ee) Apr 6 22:05:59.293: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-203.svc from pod dns-203/dns-test-2c41f0ec-f90b-4479-8b0e-ccb75329f1ee: the server could not find the requested resource (get pods dns-test-2c41f0ec-f90b-4479-8b0e-ccb75329f1ee) Apr 6 22:05:59.296: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-203.svc from pod dns-203/dns-test-2c41f0ec-f90b-4479-8b0e-ccb75329f1ee: the server could not find the requested resource (get pods dns-test-2c41f0ec-f90b-4479-8b0e-ccb75329f1ee) Apr 6 22:05:59.312: INFO: Lookups using dns-203/dns-test-2c41f0ec-f90b-4479-8b0e-ccb75329f1ee failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-203 wheezy_tcp@dns-test-service.dns-203 wheezy_udp@dns-test-service.dns-203.svc wheezy_tcp@dns-test-service.dns-203.svc wheezy_udp@_http._tcp.dns-test-service.dns-203.svc wheezy_tcp@_http._tcp.dns-test-service.dns-203.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-203 jessie_tcp@dns-test-service.dns-203 jessie_udp@dns-test-service.dns-203.svc jessie_tcp@dns-test-service.dns-203.svc jessie_udp@_http._tcp.dns-test-service.dns-203.svc jessie_tcp@_http._tcp.dns-test-service.dns-203.svc] Apr 6 22:06:04.317: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-203/dns-test-2c41f0ec-f90b-4479-8b0e-ccb75329f1ee: the server could not find the requested resource (get pods dns-test-2c41f0ec-f90b-4479-8b0e-ccb75329f1ee) Apr 6 22:06:04.321: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-203/dns-test-2c41f0ec-f90b-4479-8b0e-ccb75329f1ee: the server could not find the requested resource (get pods dns-test-2c41f0ec-f90b-4479-8b0e-ccb75329f1ee) Apr 6 22:06:04.324: INFO: Unable to read wheezy_udp@dns-test-service.dns-203 from pod dns-203/dns-test-2c41f0ec-f90b-4479-8b0e-ccb75329f1ee: the server could not find the requested resource (get pods dns-test-2c41f0ec-f90b-4479-8b0e-ccb75329f1ee) Apr 6 22:06:04.327: INFO: Unable to read wheezy_tcp@dns-test-service.dns-203 from pod dns-203/dns-test-2c41f0ec-f90b-4479-8b0e-ccb75329f1ee: the server could not find the requested resource (get pods dns-test-2c41f0ec-f90b-4479-8b0e-ccb75329f1ee) Apr 6 22:06:04.330: INFO: Unable to read wheezy_udp@dns-test-service.dns-203.svc from pod dns-203/dns-test-2c41f0ec-f90b-4479-8b0e-ccb75329f1ee: the server could not find the requested resource (get pods dns-test-2c41f0ec-f90b-4479-8b0e-ccb75329f1ee) Apr 6 22:06:04.333: INFO: Unable to read wheezy_tcp@dns-test-service.dns-203.svc from pod dns-203/dns-test-2c41f0ec-f90b-4479-8b0e-ccb75329f1ee: the server could not find the requested resource (get pods dns-test-2c41f0ec-f90b-4479-8b0e-ccb75329f1ee) Apr 6 22:06:04.359: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-203.svc from pod dns-203/dns-test-2c41f0ec-f90b-4479-8b0e-ccb75329f1ee: the server could not find the requested resource (get pods dns-test-2c41f0ec-f90b-4479-8b0e-ccb75329f1ee) Apr 6 22:06:04.362: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-203.svc from pod dns-203/dns-test-2c41f0ec-f90b-4479-8b0e-ccb75329f1ee: the server could not find the requested resource (get pods dns-test-2c41f0ec-f90b-4479-8b0e-ccb75329f1ee) Apr 6 22:06:04.383: INFO: Unable to read jessie_udp@dns-test-service from pod dns-203/dns-test-2c41f0ec-f90b-4479-8b0e-ccb75329f1ee: 
the server could not find the requested resource (get pods dns-test-2c41f0ec-f90b-4479-8b0e-ccb75329f1ee) Apr 6 22:06:04.385: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-203/dns-test-2c41f0ec-f90b-4479-8b0e-ccb75329f1ee: the server could not find the requested resource (get pods dns-test-2c41f0ec-f90b-4479-8b0e-ccb75329f1ee) Apr 6 22:06:04.388: INFO: Unable to read jessie_udp@dns-test-service.dns-203 from pod dns-203/dns-test-2c41f0ec-f90b-4479-8b0e-ccb75329f1ee: the server could not find the requested resource (get pods dns-test-2c41f0ec-f90b-4479-8b0e-ccb75329f1ee) Apr 6 22:06:04.391: INFO: Unable to read jessie_tcp@dns-test-service.dns-203 from pod dns-203/dns-test-2c41f0ec-f90b-4479-8b0e-ccb75329f1ee: the server could not find the requested resource (get pods dns-test-2c41f0ec-f90b-4479-8b0e-ccb75329f1ee) Apr 6 22:06:04.394: INFO: Unable to read jessie_udp@dns-test-service.dns-203.svc from pod dns-203/dns-test-2c41f0ec-f90b-4479-8b0e-ccb75329f1ee: the server could not find the requested resource (get pods dns-test-2c41f0ec-f90b-4479-8b0e-ccb75329f1ee) Apr 6 22:06:04.397: INFO: Unable to read jessie_tcp@dns-test-service.dns-203.svc from pod dns-203/dns-test-2c41f0ec-f90b-4479-8b0e-ccb75329f1ee: the server could not find the requested resource (get pods dns-test-2c41f0ec-f90b-4479-8b0e-ccb75329f1ee) Apr 6 22:06:04.400: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-203.svc from pod dns-203/dns-test-2c41f0ec-f90b-4479-8b0e-ccb75329f1ee: the server could not find the requested resource (get pods dns-test-2c41f0ec-f90b-4479-8b0e-ccb75329f1ee) Apr 6 22:06:04.402: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-203.svc from pod dns-203/dns-test-2c41f0ec-f90b-4479-8b0e-ccb75329f1ee: the server could not find the requested resource (get pods dns-test-2c41f0ec-f90b-4479-8b0e-ccb75329f1ee) Apr 6 22:06:04.418: INFO: Lookups using dns-203/dns-test-2c41f0ec-f90b-4479-8b0e-ccb75329f1ee failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-203 wheezy_tcp@dns-test-service.dns-203 wheezy_udp@dns-test-service.dns-203.svc wheezy_tcp@dns-test-service.dns-203.svc wheezy_udp@_http._tcp.dns-test-service.dns-203.svc wheezy_tcp@_http._tcp.dns-test-service.dns-203.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-203 jessie_tcp@dns-test-service.dns-203 jessie_udp@dns-test-service.dns-203.svc jessie_tcp@dns-test-service.dns-203.svc jessie_udp@_http._tcp.dns-test-service.dns-203.svc jessie_tcp@_http._tcp.dns-test-service.dns-203.svc] Apr 6 22:06:09.317: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-203/dns-test-2c41f0ec-f90b-4479-8b0e-ccb75329f1ee: the server could not find the requested resource (get pods dns-test-2c41f0ec-f90b-4479-8b0e-ccb75329f1ee) Apr 6 22:06:09.321: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-203/dns-test-2c41f0ec-f90b-4479-8b0e-ccb75329f1ee: the server could not find the requested resource (get pods dns-test-2c41f0ec-f90b-4479-8b0e-ccb75329f1ee) Apr 6 22:06:09.324: INFO: Unable to read wheezy_udp@dns-test-service.dns-203 from pod dns-203/dns-test-2c41f0ec-f90b-4479-8b0e-ccb75329f1ee: the server could not find the requested resource (get pods dns-test-2c41f0ec-f90b-4479-8b0e-ccb75329f1ee) Apr 6 22:06:09.328: INFO: Unable to read wheezy_tcp@dns-test-service.dns-203 from pod dns-203/dns-test-2c41f0ec-f90b-4479-8b0e-ccb75329f1ee: the server could not find the requested resource (get pods 
dns-test-2c41f0ec-f90b-4479-8b0e-ccb75329f1ee) Apr 6 22:06:09.331: INFO: Unable to read wheezy_udp@dns-test-service.dns-203.svc from pod dns-203/dns-test-2c41f0ec-f90b-4479-8b0e-ccb75329f1ee: the server could not find the requested resource (get pods dns-test-2c41f0ec-f90b-4479-8b0e-ccb75329f1ee) Apr 6 22:06:09.334: INFO: Unable to read wheezy_tcp@dns-test-service.dns-203.svc from pod dns-203/dns-test-2c41f0ec-f90b-4479-8b0e-ccb75329f1ee: the server could not find the requested resource (get pods dns-test-2c41f0ec-f90b-4479-8b0e-ccb75329f1ee) Apr 6 22:06:09.336: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-203.svc from pod dns-203/dns-test-2c41f0ec-f90b-4479-8b0e-ccb75329f1ee: the server could not find the requested resource (get pods dns-test-2c41f0ec-f90b-4479-8b0e-ccb75329f1ee) Apr 6 22:06:09.339: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-203.svc from pod dns-203/dns-test-2c41f0ec-f90b-4479-8b0e-ccb75329f1ee: the server could not find the requested resource (get pods dns-test-2c41f0ec-f90b-4479-8b0e-ccb75329f1ee) Apr 6 22:06:09.360: INFO: Unable to read jessie_udp@dns-test-service from pod dns-203/dns-test-2c41f0ec-f90b-4479-8b0e-ccb75329f1ee: the server could not find the requested resource (get pods dns-test-2c41f0ec-f90b-4479-8b0e-ccb75329f1ee) Apr 6 22:06:09.363: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-203/dns-test-2c41f0ec-f90b-4479-8b0e-ccb75329f1ee: the server could not find the requested resource (get pods dns-test-2c41f0ec-f90b-4479-8b0e-ccb75329f1ee) Apr 6 22:06:09.366: INFO: Unable to read jessie_udp@dns-test-service.dns-203 from pod dns-203/dns-test-2c41f0ec-f90b-4479-8b0e-ccb75329f1ee: the server could not find the requested resource (get pods dns-test-2c41f0ec-f90b-4479-8b0e-ccb75329f1ee) Apr 6 22:06:09.369: INFO: Unable to read jessie_tcp@dns-test-service.dns-203 from pod dns-203/dns-test-2c41f0ec-f90b-4479-8b0e-ccb75329f1ee: the server could not find the requested resource (get pods dns-test-2c41f0ec-f90b-4479-8b0e-ccb75329f1ee) Apr 6 22:06:09.372: INFO: Unable to read jessie_udp@dns-test-service.dns-203.svc from pod dns-203/dns-test-2c41f0ec-f90b-4479-8b0e-ccb75329f1ee: the server could not find the requested resource (get pods dns-test-2c41f0ec-f90b-4479-8b0e-ccb75329f1ee) Apr 6 22:06:09.375: INFO: Unable to read jessie_tcp@dns-test-service.dns-203.svc from pod dns-203/dns-test-2c41f0ec-f90b-4479-8b0e-ccb75329f1ee: the server could not find the requested resource (get pods dns-test-2c41f0ec-f90b-4479-8b0e-ccb75329f1ee) Apr 6 22:06:09.378: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-203.svc from pod dns-203/dns-test-2c41f0ec-f90b-4479-8b0e-ccb75329f1ee: the server could not find the requested resource (get pods dns-test-2c41f0ec-f90b-4479-8b0e-ccb75329f1ee) Apr 6 22:06:09.382: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-203.svc from pod dns-203/dns-test-2c41f0ec-f90b-4479-8b0e-ccb75329f1ee: the server could not find the requested resource (get pods dns-test-2c41f0ec-f90b-4479-8b0e-ccb75329f1ee) Apr 6 22:06:09.402: INFO: Lookups using dns-203/dns-test-2c41f0ec-f90b-4479-8b0e-ccb75329f1ee failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-203 wheezy_tcp@dns-test-service.dns-203 wheezy_udp@dns-test-service.dns-203.svc wheezy_tcp@dns-test-service.dns-203.svc wheezy_udp@_http._tcp.dns-test-service.dns-203.svc wheezy_tcp@_http._tcp.dns-test-service.dns-203.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service 
jessie_udp@dns-test-service.dns-203 jessie_tcp@dns-test-service.dns-203 jessie_udp@dns-test-service.dns-203.svc jessie_tcp@dns-test-service.dns-203.svc jessie_udp@_http._tcp.dns-test-service.dns-203.svc jessie_tcp@_http._tcp.dns-test-service.dns-203.svc] Apr 6 22:06:14.317: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-203/dns-test-2c41f0ec-f90b-4479-8b0e-ccb75329f1ee: the server could not find the requested resource (get pods dns-test-2c41f0ec-f90b-4479-8b0e-ccb75329f1ee) Apr 6 22:06:14.321: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-203/dns-test-2c41f0ec-f90b-4479-8b0e-ccb75329f1ee: the server could not find the requested resource (get pods dns-test-2c41f0ec-f90b-4479-8b0e-ccb75329f1ee) Apr 6 22:06:14.325: INFO: Unable to read wheezy_udp@dns-test-service.dns-203 from pod dns-203/dns-test-2c41f0ec-f90b-4479-8b0e-ccb75329f1ee: the server could not find the requested resource (get pods dns-test-2c41f0ec-f90b-4479-8b0e-ccb75329f1ee) Apr 6 22:06:14.328: INFO: Unable to read wheezy_tcp@dns-test-service.dns-203 from pod dns-203/dns-test-2c41f0ec-f90b-4479-8b0e-ccb75329f1ee: the server could not find the requested resource (get pods dns-test-2c41f0ec-f90b-4479-8b0e-ccb75329f1ee) Apr 6 22:06:14.332: INFO: Unable to read wheezy_udp@dns-test-service.dns-203.svc from pod dns-203/dns-test-2c41f0ec-f90b-4479-8b0e-ccb75329f1ee: the server could not find the requested resource (get pods dns-test-2c41f0ec-f90b-4479-8b0e-ccb75329f1ee) Apr 6 22:06:14.338: INFO: Unable to read wheezy_tcp@dns-test-service.dns-203.svc from pod dns-203/dns-test-2c41f0ec-f90b-4479-8b0e-ccb75329f1ee: the server could not find the requested resource (get pods dns-test-2c41f0ec-f90b-4479-8b0e-ccb75329f1ee) Apr 6 22:06:14.359: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-203.svc from pod dns-203/dns-test-2c41f0ec-f90b-4479-8b0e-ccb75329f1ee: the server could not find the requested resource (get pods dns-test-2c41f0ec-f90b-4479-8b0e-ccb75329f1ee) Apr 6 22:06:14.362: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-203.svc from pod dns-203/dns-test-2c41f0ec-f90b-4479-8b0e-ccb75329f1ee: the server could not find the requested resource (get pods dns-test-2c41f0ec-f90b-4479-8b0e-ccb75329f1ee) Apr 6 22:06:14.400: INFO: Unable to read jessie_udp@dns-test-service from pod dns-203/dns-test-2c41f0ec-f90b-4479-8b0e-ccb75329f1ee: the server could not find the requested resource (get pods dns-test-2c41f0ec-f90b-4479-8b0e-ccb75329f1ee) Apr 6 22:06:14.403: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-203/dns-test-2c41f0ec-f90b-4479-8b0e-ccb75329f1ee: the server could not find the requested resource (get pods dns-test-2c41f0ec-f90b-4479-8b0e-ccb75329f1ee) Apr 6 22:06:14.407: INFO: Unable to read jessie_udp@dns-test-service.dns-203 from pod dns-203/dns-test-2c41f0ec-f90b-4479-8b0e-ccb75329f1ee: the server could not find the requested resource (get pods dns-test-2c41f0ec-f90b-4479-8b0e-ccb75329f1ee) Apr 6 22:06:14.410: INFO: Unable to read jessie_tcp@dns-test-service.dns-203 from pod dns-203/dns-test-2c41f0ec-f90b-4479-8b0e-ccb75329f1ee: the server could not find the requested resource (get pods dns-test-2c41f0ec-f90b-4479-8b0e-ccb75329f1ee) Apr 6 22:06:14.412: INFO: Unable to read jessie_udp@dns-test-service.dns-203.svc from pod dns-203/dns-test-2c41f0ec-f90b-4479-8b0e-ccb75329f1ee: the server could not find the requested resource (get pods dns-test-2c41f0ec-f90b-4479-8b0e-ccb75329f1ee) Apr 6 22:06:14.415: INFO: Unable to read 
jessie_tcp@dns-test-service.dns-203.svc from pod dns-203/dns-test-2c41f0ec-f90b-4479-8b0e-ccb75329f1ee: the server could not find the requested resource (get pods dns-test-2c41f0ec-f90b-4479-8b0e-ccb75329f1ee) Apr 6 22:06:14.418: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-203.svc from pod dns-203/dns-test-2c41f0ec-f90b-4479-8b0e-ccb75329f1ee: the server could not find the requested resource (get pods dns-test-2c41f0ec-f90b-4479-8b0e-ccb75329f1ee) Apr 6 22:06:14.421: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-203.svc from pod dns-203/dns-test-2c41f0ec-f90b-4479-8b0e-ccb75329f1ee: the server could not find the requested resource (get pods dns-test-2c41f0ec-f90b-4479-8b0e-ccb75329f1ee) Apr 6 22:06:14.452: INFO: Lookups using dns-203/dns-test-2c41f0ec-f90b-4479-8b0e-ccb75329f1ee failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-203 wheezy_tcp@dns-test-service.dns-203 wheezy_udp@dns-test-service.dns-203.svc wheezy_tcp@dns-test-service.dns-203.svc wheezy_udp@_http._tcp.dns-test-service.dns-203.svc wheezy_tcp@_http._tcp.dns-test-service.dns-203.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-203 jessie_tcp@dns-test-service.dns-203 jessie_udp@dns-test-service.dns-203.svc jessie_tcp@dns-test-service.dns-203.svc jessie_udp@_http._tcp.dns-test-service.dns-203.svc jessie_tcp@_http._tcp.dns-test-service.dns-203.svc] Apr 6 22:06:19.326: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-203/dns-test-2c41f0ec-f90b-4479-8b0e-ccb75329f1ee: the server could not find the requested resource (get pods dns-test-2c41f0ec-f90b-4479-8b0e-ccb75329f1ee) Apr 6 22:06:19.330: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-203/dns-test-2c41f0ec-f90b-4479-8b0e-ccb75329f1ee: the server could not find the requested resource (get pods dns-test-2c41f0ec-f90b-4479-8b0e-ccb75329f1ee) Apr 6 22:06:19.334: INFO: Unable to read wheezy_udp@dns-test-service.dns-203 from pod dns-203/dns-test-2c41f0ec-f90b-4479-8b0e-ccb75329f1ee: the server could not find the requested resource (get pods dns-test-2c41f0ec-f90b-4479-8b0e-ccb75329f1ee) Apr 6 22:06:19.337: INFO: Unable to read wheezy_tcp@dns-test-service.dns-203 from pod dns-203/dns-test-2c41f0ec-f90b-4479-8b0e-ccb75329f1ee: the server could not find the requested resource (get pods dns-test-2c41f0ec-f90b-4479-8b0e-ccb75329f1ee) Apr 6 22:06:19.340: INFO: Unable to read wheezy_udp@dns-test-service.dns-203.svc from pod dns-203/dns-test-2c41f0ec-f90b-4479-8b0e-ccb75329f1ee: the server could not find the requested resource (get pods dns-test-2c41f0ec-f90b-4479-8b0e-ccb75329f1ee) Apr 6 22:06:19.342: INFO: Unable to read wheezy_tcp@dns-test-service.dns-203.svc from pod dns-203/dns-test-2c41f0ec-f90b-4479-8b0e-ccb75329f1ee: the server could not find the requested resource (get pods dns-test-2c41f0ec-f90b-4479-8b0e-ccb75329f1ee) Apr 6 22:06:19.345: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-203.svc from pod dns-203/dns-test-2c41f0ec-f90b-4479-8b0e-ccb75329f1ee: the server could not find the requested resource (get pods dns-test-2c41f0ec-f90b-4479-8b0e-ccb75329f1ee) Apr 6 22:06:19.347: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-203.svc from pod dns-203/dns-test-2c41f0ec-f90b-4479-8b0e-ccb75329f1ee: the server could not find the requested resource (get pods dns-test-2c41f0ec-f90b-4479-8b0e-ccb75329f1ee) Apr 6 22:06:19.393: INFO: Unable to read jessie_udp@dns-test-service from pod 
dns-203/dns-test-2c41f0ec-f90b-4479-8b0e-ccb75329f1ee: the server could not find the requested resource (get pods dns-test-2c41f0ec-f90b-4479-8b0e-ccb75329f1ee) Apr 6 22:06:19.395: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-203/dns-test-2c41f0ec-f90b-4479-8b0e-ccb75329f1ee: the server could not find the requested resource (get pods dns-test-2c41f0ec-f90b-4479-8b0e-ccb75329f1ee) Apr 6 22:06:19.398: INFO: Unable to read jessie_udp@dns-test-service.dns-203 from pod dns-203/dns-test-2c41f0ec-f90b-4479-8b0e-ccb75329f1ee: the server could not find the requested resource (get pods dns-test-2c41f0ec-f90b-4479-8b0e-ccb75329f1ee) Apr 6 22:06:19.400: INFO: Unable to read jessie_tcp@dns-test-service.dns-203 from pod dns-203/dns-test-2c41f0ec-f90b-4479-8b0e-ccb75329f1ee: the server could not find the requested resource (get pods dns-test-2c41f0ec-f90b-4479-8b0e-ccb75329f1ee) Apr 6 22:06:19.402: INFO: Unable to read jessie_udp@dns-test-service.dns-203.svc from pod dns-203/dns-test-2c41f0ec-f90b-4479-8b0e-ccb75329f1ee: the server could not find the requested resource (get pods dns-test-2c41f0ec-f90b-4479-8b0e-ccb75329f1ee) Apr 6 22:06:19.404: INFO: Unable to read jessie_tcp@dns-test-service.dns-203.svc from pod dns-203/dns-test-2c41f0ec-f90b-4479-8b0e-ccb75329f1ee: the server could not find the requested resource (get pods dns-test-2c41f0ec-f90b-4479-8b0e-ccb75329f1ee) Apr 6 22:06:19.406: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-203.svc from pod dns-203/dns-test-2c41f0ec-f90b-4479-8b0e-ccb75329f1ee: the server could not find the requested resource (get pods dns-test-2c41f0ec-f90b-4479-8b0e-ccb75329f1ee) Apr 6 22:06:19.408: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-203.svc from pod dns-203/dns-test-2c41f0ec-f90b-4479-8b0e-ccb75329f1ee: the server could not find the requested resource (get pods dns-test-2c41f0ec-f90b-4479-8b0e-ccb75329f1ee) Apr 6 22:06:19.432: INFO: Lookups using dns-203/dns-test-2c41f0ec-f90b-4479-8b0e-ccb75329f1ee failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-203 wheezy_tcp@dns-test-service.dns-203 wheezy_udp@dns-test-service.dns-203.svc wheezy_tcp@dns-test-service.dns-203.svc wheezy_udp@_http._tcp.dns-test-service.dns-203.svc wheezy_tcp@_http._tcp.dns-test-service.dns-203.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-203 jessie_tcp@dns-test-service.dns-203 jessie_udp@dns-test-service.dns-203.svc jessie_tcp@dns-test-service.dns-203.svc jessie_udp@_http._tcp.dns-test-service.dns-203.svc jessie_tcp@_http._tcp.dns-test-service.dns-203.svc] Apr 6 22:06:24.317: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-203/dns-test-2c41f0ec-f90b-4479-8b0e-ccb75329f1ee: the server could not find the requested resource (get pods dns-test-2c41f0ec-f90b-4479-8b0e-ccb75329f1ee) Apr 6 22:06:24.320: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-203/dns-test-2c41f0ec-f90b-4479-8b0e-ccb75329f1ee: the server could not find the requested resource (get pods dns-test-2c41f0ec-f90b-4479-8b0e-ccb75329f1ee) Apr 6 22:06:24.324: INFO: Unable to read wheezy_udp@dns-test-service.dns-203 from pod dns-203/dns-test-2c41f0ec-f90b-4479-8b0e-ccb75329f1ee: the server could not find the requested resource (get pods dns-test-2c41f0ec-f90b-4479-8b0e-ccb75329f1ee) Apr 6 22:06:24.326: INFO: Unable to read wheezy_tcp@dns-test-service.dns-203 from pod dns-203/dns-test-2c41f0ec-f90b-4479-8b0e-ccb75329f1ee: the server could not find the requested 
resource (get pods dns-test-2c41f0ec-f90b-4479-8b0e-ccb75329f1ee) Apr 6 22:06:24.330: INFO: Unable to read wheezy_udp@dns-test-service.dns-203.svc from pod dns-203/dns-test-2c41f0ec-f90b-4479-8b0e-ccb75329f1ee: the server could not find the requested resource (get pods dns-test-2c41f0ec-f90b-4479-8b0e-ccb75329f1ee) Apr 6 22:06:24.333: INFO: Unable to read wheezy_tcp@dns-test-service.dns-203.svc from pod dns-203/dns-test-2c41f0ec-f90b-4479-8b0e-ccb75329f1ee: the server could not find the requested resource (get pods dns-test-2c41f0ec-f90b-4479-8b0e-ccb75329f1ee) Apr 6 22:06:24.335: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-203.svc from pod dns-203/dns-test-2c41f0ec-f90b-4479-8b0e-ccb75329f1ee: the server could not find the requested resource (get pods dns-test-2c41f0ec-f90b-4479-8b0e-ccb75329f1ee) Apr 6 22:06:24.338: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-203.svc from pod dns-203/dns-test-2c41f0ec-f90b-4479-8b0e-ccb75329f1ee: the server could not find the requested resource (get pods dns-test-2c41f0ec-f90b-4479-8b0e-ccb75329f1ee) Apr 6 22:06:24.355: INFO: Unable to read jessie_udp@dns-test-service from pod dns-203/dns-test-2c41f0ec-f90b-4479-8b0e-ccb75329f1ee: the server could not find the requested resource (get pods dns-test-2c41f0ec-f90b-4479-8b0e-ccb75329f1ee) Apr 6 22:06:24.357: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-203/dns-test-2c41f0ec-f90b-4479-8b0e-ccb75329f1ee: the server could not find the requested resource (get pods dns-test-2c41f0ec-f90b-4479-8b0e-ccb75329f1ee) Apr 6 22:06:24.360: INFO: Unable to read jessie_udp@dns-test-service.dns-203 from pod dns-203/dns-test-2c41f0ec-f90b-4479-8b0e-ccb75329f1ee: the server could not find the requested resource (get pods dns-test-2c41f0ec-f90b-4479-8b0e-ccb75329f1ee) Apr 6 22:06:24.362: INFO: Unable to read jessie_tcp@dns-test-service.dns-203 from pod dns-203/dns-test-2c41f0ec-f90b-4479-8b0e-ccb75329f1ee: the server could not find the requested resource (get pods dns-test-2c41f0ec-f90b-4479-8b0e-ccb75329f1ee) Apr 6 22:06:24.365: INFO: Unable to read jessie_udp@dns-test-service.dns-203.svc from pod dns-203/dns-test-2c41f0ec-f90b-4479-8b0e-ccb75329f1ee: the server could not find the requested resource (get pods dns-test-2c41f0ec-f90b-4479-8b0e-ccb75329f1ee) Apr 6 22:06:24.367: INFO: Unable to read jessie_tcp@dns-test-service.dns-203.svc from pod dns-203/dns-test-2c41f0ec-f90b-4479-8b0e-ccb75329f1ee: the server could not find the requested resource (get pods dns-test-2c41f0ec-f90b-4479-8b0e-ccb75329f1ee) Apr 6 22:06:24.370: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-203.svc from pod dns-203/dns-test-2c41f0ec-f90b-4479-8b0e-ccb75329f1ee: the server could not find the requested resource (get pods dns-test-2c41f0ec-f90b-4479-8b0e-ccb75329f1ee) Apr 6 22:06:24.373: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-203.svc from pod dns-203/dns-test-2c41f0ec-f90b-4479-8b0e-ccb75329f1ee: the server could not find the requested resource (get pods dns-test-2c41f0ec-f90b-4479-8b0e-ccb75329f1ee) Apr 6 22:06:24.388: INFO: Lookups using dns-203/dns-test-2c41f0ec-f90b-4479-8b0e-ccb75329f1ee failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-203 wheezy_tcp@dns-test-service.dns-203 wheezy_udp@dns-test-service.dns-203.svc wheezy_tcp@dns-test-service.dns-203.svc wheezy_udp@_http._tcp.dns-test-service.dns-203.svc wheezy_tcp@_http._tcp.dns-test-service.dns-203.svc jessie_udp@dns-test-service 
jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-203 jessie_tcp@dns-test-service.dns-203 jessie_udp@dns-test-service.dns-203.svc jessie_tcp@dns-test-service.dns-203.svc jessie_udp@_http._tcp.dns-test-service.dns-203.svc jessie_tcp@_http._tcp.dns-test-service.dns-203.svc] Apr 6 22:06:29.535: INFO: DNS probes using dns-203/dns-test-2c41f0ec-f90b-4479-8b0e-ccb75329f1ee succeeded STEP: deleting the pod STEP: deleting the test service STEP: deleting the test headless service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 6 22:06:30.059: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-203" for this suite. • [SLOW TEST:37.020 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] DNS should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]","total":278,"completed":245,"skipped":4114,"failed":0} SSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 6 22:06:30.083: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [It] should check if v1 is in available api versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: validating api versions Apr 6 22:06:30.201: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config api-versions' Apr 6 22:06:30.410: INFO: stderr: "" Apr 6 22:06:30.410: INFO: stdout: "admissionregistration.k8s.io/v1\nadmissionregistration.k8s.io/v1beta1\napiextensions.k8s.io/v1\napiextensions.k8s.io/v1beta1\napiregistration.k8s.io/v1\napiregistration.k8s.io/v1beta1\napps/v1\nauthentication.k8s.io/v1\nauthentication.k8s.io/v1beta1\nauthorization.k8s.io/v1\nauthorization.k8s.io/v1beta1\nautoscaling/v1\nautoscaling/v2beta1\nautoscaling/v2beta2\nbatch/v1\nbatch/v1beta1\ncertificates.k8s.io/v1beta1\ncoordination.k8s.io/v1\ncoordination.k8s.io/v1beta1\ndiscovery.k8s.io/v1beta1\nevents.k8s.io/v1beta1\nextensions/v1beta1\nnetworking.k8s.io/v1\nnetworking.k8s.io/v1beta1\nnode.k8s.io/v1beta1\npolicy/v1beta1\nrbac.authorization.k8s.io/v1\nrbac.authorization.k8s.io/v1beta1\nscheduling.k8s.io/v1\nscheduling.k8s.io/v1beta1\nstorage.k8s.io/v1\nstorage.k8s.io/v1beta1\nv1\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 6 22:06:30.410: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-9275" for this suite. 
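The api-versions check above is the simplest discovery assertion in the suite: shell out to kubectl and require that the core "v1" group/version appears in the list. A minimal stand-alone sketch of the same assertion (not the framework's actual Go code; assumes the kubeconfig shown above):

#!/usr/bin/env bash
set -euo pipefail

# `kubectl api-versions` prints one group/version per line. grep -x matches the
# whole line, so prefixed versions such as "apps/v1" do not count as a hit.
if kubectl --kubeconfig=/root/.kube/config api-versions | grep -qx 'v1'; then
  echo "OK: core v1 API is advertised"
else
  echo "FAIL: core v1 API is missing" >&2
  exit 1
fi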
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions [Conformance]","total":278,"completed":246,"skipped":4121,"failed":0} SS ------------------------------ [sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 6 22:06:30.419: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [It] should create and stop a working application [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating all guestbook components Apr 6 22:06:30.480: INFO: apiVersion: v1 kind: Service metadata: name: agnhost-slave labels: app: agnhost role: slave tier: backend spec: ports: - port: 6379 selector: app: agnhost role: slave tier: backend Apr 6 22:06:30.480: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-8094' Apr 6 22:06:33.993: INFO: stderr: "" Apr 6 22:06:33.993: INFO: stdout: "service/agnhost-slave created\n" Apr 6 22:06:33.993: INFO: apiVersion: v1 kind: Service metadata: name: agnhost-master labels: app: agnhost role: master tier: backend spec: ports: - port: 6379 targetPort: 6379 selector: app: agnhost role: master tier: backend Apr 6 22:06:33.993: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-8094' Apr 6 22:06:34.347: INFO: stderr: "" Apr 6 22:06:34.347: INFO: stdout: "service/agnhost-master created\n" Apr 6 22:06:34.347: INFO: apiVersion: v1 kind: Service metadata: name: frontend labels: app: guestbook tier: frontend spec: # if your cluster supports it, uncomment the following to automatically create # an external load-balanced IP for the frontend service. 
# type: LoadBalancer ports: - port: 80 selector: app: guestbook tier: frontend Apr 6 22:06:34.347: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-8094' Apr 6 22:06:34.606: INFO: stderr: "" Apr 6 22:06:34.606: INFO: stdout: "service/frontend created\n" Apr 6 22:06:34.607: INFO: apiVersion: apps/v1 kind: Deployment metadata: name: frontend spec: replicas: 3 selector: matchLabels: app: guestbook tier: frontend template: metadata: labels: app: guestbook tier: frontend spec: containers: - name: guestbook-frontend image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8 args: [ "guestbook", "--backend-port", "6379" ] resources: requests: cpu: 100m memory: 100Mi ports: - containerPort: 80 Apr 6 22:06:34.607: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-8094' Apr 6 22:06:34.845: INFO: stderr: "" Apr 6 22:06:34.845: INFO: stdout: "deployment.apps/frontend created\n" Apr 6 22:06:34.846: INFO: apiVersion: apps/v1 kind: Deployment metadata: name: agnhost-master spec: replicas: 1 selector: matchLabels: app: agnhost role: master tier: backend template: metadata: labels: app: agnhost role: master tier: backend spec: containers: - name: master image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8 args: [ "guestbook", "--http-port", "6379" ] resources: requests: cpu: 100m memory: 100Mi ports: - containerPort: 6379 Apr 6 22:06:34.846: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-8094' Apr 6 22:06:35.097: INFO: stderr: "" Apr 6 22:06:35.097: INFO: stdout: "deployment.apps/agnhost-master created\n" Apr 6 22:06:35.097: INFO: apiVersion: apps/v1 kind: Deployment metadata: name: agnhost-slave spec: replicas: 2 selector: matchLabels: app: agnhost role: slave tier: backend template: metadata: labels: app: agnhost role: slave tier: backend spec: containers: - name: slave image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8 args: [ "guestbook", "--slaveof", "agnhost-master", "--http-port", "6379" ] resources: requests: cpu: 100m memory: 100Mi ports: - containerPort: 6379 Apr 6 22:06:35.097: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-8094' Apr 6 22:06:35.386: INFO: stderr: "" Apr 6 22:06:35.386: INFO: stdout: "deployment.apps/agnhost-slave created\n" STEP: validating guestbook app Apr 6 22:06:35.386: INFO: Waiting for all frontend pods to be Running. Apr 6 22:06:45.436: INFO: Waiting for frontend to serve content. Apr 6 22:06:45.448: INFO: Trying to add a new entry to the guestbook. Apr 6 22:06:45.458: INFO: Verifying that added entry can be retrieved. STEP: using delete to clean up resources Apr 6 22:06:45.465: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-8094' Apr 6 22:06:45.645: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Apr 6 22:06:45.645: INFO: stdout: "service \"agnhost-slave\" force deleted\n" STEP: using delete to clean up resources Apr 6 22:06:45.646: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-8094' Apr 6 22:06:45.813: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" Apr 6 22:06:45.813: INFO: stdout: "service \"agnhost-master\" force deleted\n" STEP: using delete to clean up resources Apr 6 22:06:45.813: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-8094' Apr 6 22:06:45.920: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Apr 6 22:06:45.920: INFO: stdout: "service \"frontend\" force deleted\n" STEP: using delete to clean up resources Apr 6 22:06:45.920: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-8094' Apr 6 22:06:46.028: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Apr 6 22:06:46.028: INFO: stdout: "deployment.apps \"frontend\" force deleted\n" STEP: using delete to clean up resources Apr 6 22:06:46.028: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-8094' Apr 6 22:06:46.158: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Apr 6 22:06:46.158: INFO: stdout: "deployment.apps \"agnhost-master\" force deleted\n" STEP: using delete to clean up resources Apr 6 22:06:46.158: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-8094' Apr 6 22:06:46.274: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Apr 6 22:06:46.274: INFO: stdout: "deployment.apps \"agnhost-slave\" force deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 6 22:06:46.274: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-8094" for this suite. 
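The repeated "Immediate deletion does not wait for confirmation" warnings above are expected: --grace-period=0 --force removes the objects from the API server immediately instead of waiting for the kubelet to confirm termination, so each delete returns before the pods are actually gone. The test re-pipes the original manifests to kubectl delete -f -; an equivalent cleanup by name would look like this (a sketch, not the test's exact invocation):

# Force-delete the guestbook services and deployments created above. Objects
# disappear from the API at once; their pods may linger briefly on the nodes.
kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-8094 \
  delete service agnhost-slave agnhost-master frontend \
  --grace-period=0 --force
kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-8094 \
  delete deployment frontend agnhost-master agnhost-slave \
  --grace-period=0 --force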
• [SLOW TEST:15.862 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Guestbook application /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:380 should create and stop a working application [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]","total":278,"completed":247,"skipped":4123,"failed":0} SSSSSSSSS ------------------------------ [sig-apps] Job should run a job to completion when tasks sometimes fail and are locally restarted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 6 22:06:46.282: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename job STEP: Waiting for a default service account to be provisioned in namespace [It] should run a job to completion when tasks sometimes fail and are locally restarted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a job STEP: Ensuring job reaches completions [AfterEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 6 22:07:00.417: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "job-7762" for this suite. • [SLOW TEST:14.142 seconds] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should run a job to completion when tasks sometimes fail and are locally restarted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Job should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]","total":278,"completed":248,"skipped":4132,"failed":0} [sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 6 22:07:00.424: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating the pod Apr 6 22:07:05.234: INFO: Successfully updated pod "annotationupdate06799fff-597c-47d9-b104-d03279dbdbf6" [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 6 22:07:07.259: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready 
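The annotation-update test above passes because downwardAPI volume files that track metadata.annotations are rewritten in place by the kubelet after the pod is patched, with no restart. A pod of roughly the right shape (an illustrative sketch with hypothetical names, not the test's generated fixture):

# Mirror the pod's own annotations into a file, then change them and watch the
# kubelet refresh the file (propagation can take up to a kubelet sync period).
cat <<'EOF' | kubectl --kubeconfig=/root/.kube/config apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: annotationupdate-demo   # illustrative name
  annotations:
    build: "one"
spec:
  containers:
  - name: client-container
    image: busybox:1.31
    command: ["sh", "-c", "while true; do cat /etc/podinfo/annotations; sleep 5; done"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: annotations
        fieldRef:
          fieldPath: metadata.annotations
EOF
kubectl --kubeconfig=/root/.kube/config annotate pod annotationupdate-demo build=two --overwrite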
STEP: Destroying namespace "downward-api-2293" for this suite. • [SLOW TEST:6.843 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35 should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance]","total":278,"completed":249,"skipped":4132,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 6 22:07:07.267: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0644 on tmpfs Apr 6 22:07:07.355: INFO: Waiting up to 5m0s for pod "pod-dafda570-a97e-4ab5-aff0-c4f38b21da0c" in namespace "emptydir-8689" to be "success or failure" Apr 6 22:07:07.358: INFO: Pod "pod-dafda570-a97e-4ab5-aff0-c4f38b21da0c": Phase="Pending", Reason="", readiness=false. Elapsed: 3.119317ms Apr 6 22:07:09.397: INFO: Pod "pod-dafda570-a97e-4ab5-aff0-c4f38b21da0c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.041730216s Apr 6 22:07:11.401: INFO: Pod "pod-dafda570-a97e-4ab5-aff0-c4f38b21da0c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.045797288s STEP: Saw pod success Apr 6 22:07:11.401: INFO: Pod "pod-dafda570-a97e-4ab5-aff0-c4f38b21da0c" satisfied condition "success or failure" Apr 6 22:07:11.404: INFO: Trying to get logs from node jerma-worker pod pod-dafda570-a97e-4ab5-aff0-c4f38b21da0c container test-container: STEP: delete the pod Apr 6 22:07:11.432: INFO: Waiting for pod pod-dafda570-a97e-4ab5-aff0-c4f38b21da0c to disappear Apr 6 22:07:11.437: INFO: Pod pod-dafda570-a97e-4ab5-aff0-c4f38b21da0c no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 6 22:07:11.437: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-8689" for this suite. 
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":250,"skipped":4144,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 6 22:07:11.444: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0644 on node default medium Apr 6 22:07:11.576: INFO: Waiting up to 5m0s for pod "pod-92c9654a-6599-4c71-8d3d-5b2f05c43a28" in namespace "emptydir-9097" to be "success or failure" Apr 6 22:07:11.660: INFO: Pod "pod-92c9654a-6599-4c71-8d3d-5b2f05c43a28": Phase="Pending", Reason="", readiness=false. Elapsed: 83.500164ms Apr 6 22:07:13.690: INFO: Pod "pod-92c9654a-6599-4c71-8d3d-5b2f05c43a28": Phase="Pending", Reason="", readiness=false. Elapsed: 2.113479918s Apr 6 22:07:15.693: INFO: Pod "pod-92c9654a-6599-4c71-8d3d-5b2f05c43a28": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.11718019s STEP: Saw pod success Apr 6 22:07:15.693: INFO: Pod "pod-92c9654a-6599-4c71-8d3d-5b2f05c43a28" satisfied condition "success or failure" Apr 6 22:07:15.696: INFO: Trying to get logs from node jerma-worker2 pod pod-92c9654a-6599-4c71-8d3d-5b2f05c43a28 container test-container: STEP: delete the pod Apr 6 22:07:15.733: INFO: Waiting for pod pod-92c9654a-6599-4c71-8d3d-5b2f05c43a28 to disappear Apr 6 22:07:15.750: INFO: Pod pod-92c9654a-6599-4c71-8d3d-5b2f05c43a28 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 6 22:07:15.750: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-9097" for this suite. 
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":251,"skipped":4189,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 6 22:07:15.782: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Apr 6 22:07:15.865: INFO: Waiting up to 5m0s for pod "downwardapi-volume-4ce04e62-ce52-41b1-ad68-3c204b7bd591" in namespace "downward-api-545" to be "success or failure" Apr 6 22:07:15.870: INFO: Pod "downwardapi-volume-4ce04e62-ce52-41b1-ad68-3c204b7bd591": Phase="Pending", Reason="", readiness=false. Elapsed: 4.238687ms Apr 6 22:07:17.873: INFO: Pod "downwardapi-volume-4ce04e62-ce52-41b1-ad68-3c204b7bd591": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007482939s Apr 6 22:07:19.877: INFO: Pod "downwardapi-volume-4ce04e62-ce52-41b1-ad68-3c204b7bd591": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011654205s STEP: Saw pod success Apr 6 22:07:19.877: INFO: Pod "downwardapi-volume-4ce04e62-ce52-41b1-ad68-3c204b7bd591" satisfied condition "success or failure" Apr 6 22:07:19.880: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-4ce04e62-ce52-41b1-ad68-3c204b7bd591 container client-container: STEP: delete the pod Apr 6 22:07:19.913: INFO: Waiting for pod downwardapi-volume-4ce04e62-ce52-41b1-ad68-3c204b7bd591 to disappear Apr 6 22:07:19.923: INFO: Pod downwardapi-volume-4ce04e62-ce52-41b1-ad68-3c204b7bd591 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 6 22:07:19.924: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-545" for this suite. 
•{"msg":"PASSED [sig-storage] Downward API volume should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":252,"skipped":4202,"failed":0} ------------------------------ [sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 6 22:07:19.931: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Apr 6 22:07:20.023: INFO: Waiting up to 5m0s for pod "downwardapi-volume-6bf10666-a751-4358-8c9e-e51548bd1477" in namespace "projected-8570" to be "success or failure" Apr 6 22:07:20.038: INFO: Pod "downwardapi-volume-6bf10666-a751-4358-8c9e-e51548bd1477": Phase="Pending", Reason="", readiness=false. Elapsed: 15.049123ms Apr 6 22:07:22.055: INFO: Pod "downwardapi-volume-6bf10666-a751-4358-8c9e-e51548bd1477": Phase="Pending", Reason="", readiness=false. Elapsed: 2.031833449s Apr 6 22:07:24.059: INFO: Pod "downwardapi-volume-6bf10666-a751-4358-8c9e-e51548bd1477": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.036116517s STEP: Saw pod success Apr 6 22:07:24.059: INFO: Pod "downwardapi-volume-6bf10666-a751-4358-8c9e-e51548bd1477" satisfied condition "success or failure" Apr 6 22:07:24.062: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-6bf10666-a751-4358-8c9e-e51548bd1477 container client-container: STEP: delete the pod Apr 6 22:07:24.079: INFO: Waiting for pod downwardapi-volume-6bf10666-a751-4358-8c9e-e51548bd1477 to disappear Apr 6 22:07:24.090: INFO: Pod downwardapi-volume-6bf10666-a751-4358-8c9e-e51548bd1477 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 6 22:07:24.090: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8570" for this suite. 
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance]","total":278,"completed":253,"skipped":4202,"failed":0} SS ------------------------------ [sig-apps] ReplicationController should surface a failure condition on a common issue like exceeded quota [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 6 22:07:24.096: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should surface a failure condition on a common issue like exceeded quota [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Apr 6 22:07:24.164: INFO: Creating quota "condition-test" that allows only two pods to run in the current namespace STEP: Creating rc "condition-test" that asks for more than the allowed pod quota STEP: Checking rc "condition-test" has the desired failure condition set STEP: Scaling down rc "condition-test" to satisfy pod quota Apr 6 22:07:26.208: INFO: Updating replication controller "condition-test" STEP: Checking rc "condition-test" has no failure condition set [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 6 22:07:27.325: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-5398" for this suite. •{"msg":"PASSED [sig-apps] ReplicationController should surface a failure condition on a common issue like exceeded quota [Conformance]","total":278,"completed":254,"skipped":4204,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 6 22:07:27.333: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:64 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:79 STEP: Creating service test in namespace statefulset-7156 [It] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Initializing watcher for selector baz=blah,foo=bar STEP: Creating stateful set ss in namespace statefulset-7156 STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-7156 Apr 6 22:07:27.827: INFO: Found 0 stateful pods, waiting for 1 Apr 6 22:07:37.832: INFO: Waiting for pod ss-0 to 
enter Running - Ready=true, currently Running - Ready=true STEP: Confirming that stateful set scale up will halt with unhealthy stateful pod Apr 6 22:07:37.836: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7156 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Apr 6 22:07:38.058: INFO: stderr: "I0406 22:07:37.962440 3649 log.go:172] (0xc0000f4370) (0xc0009ae0a0) Create stream\nI0406 22:07:37.962497 3649 log.go:172] (0xc0000f4370) (0xc0009ae0a0) Stream added, broadcasting: 1\nI0406 22:07:37.964772 3649 log.go:172] (0xc0000f4370) Reply frame received for 1\nI0406 22:07:37.964806 3649 log.go:172] (0xc0000f4370) (0xc000bf2000) Create stream\nI0406 22:07:37.964815 3649 log.go:172] (0xc0000f4370) (0xc000bf2000) Stream added, broadcasting: 3\nI0406 22:07:37.965777 3649 log.go:172] (0xc0000f4370) Reply frame received for 3\nI0406 22:07:37.965817 3649 log.go:172] (0xc0000f4370) (0xc0009ae140) Create stream\nI0406 22:07:37.965839 3649 log.go:172] (0xc0000f4370) (0xc0009ae140) Stream added, broadcasting: 5\nI0406 22:07:37.966587 3649 log.go:172] (0xc0000f4370) Reply frame received for 5\nI0406 22:07:38.021906 3649 log.go:172] (0xc0000f4370) Data frame received for 5\nI0406 22:07:38.021938 3649 log.go:172] (0xc0009ae140) (5) Data frame handling\nI0406 22:07:38.021963 3649 log.go:172] (0xc0009ae140) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0406 22:07:38.051527 3649 log.go:172] (0xc0000f4370) Data frame received for 5\nI0406 22:07:38.051548 3649 log.go:172] (0xc0009ae140) (5) Data frame handling\nI0406 22:07:38.051567 3649 log.go:172] (0xc0000f4370) Data frame received for 3\nI0406 22:07:38.051592 3649 log.go:172] (0xc000bf2000) (3) Data frame handling\nI0406 22:07:38.051608 3649 log.go:172] (0xc000bf2000) (3) Data frame sent\nI0406 22:07:38.051624 3649 log.go:172] (0xc0000f4370) Data frame received for 3\nI0406 22:07:38.051638 3649 log.go:172] (0xc000bf2000) (3) Data frame handling\nI0406 22:07:38.053607 3649 log.go:172] (0xc0000f4370) Data frame received for 1\nI0406 22:07:38.053623 3649 log.go:172] (0xc0009ae0a0) (1) Data frame handling\nI0406 22:07:38.053631 3649 log.go:172] (0xc0009ae0a0) (1) Data frame sent\nI0406 22:07:38.053643 3649 log.go:172] (0xc0000f4370) (0xc0009ae0a0) Stream removed, broadcasting: 1\nI0406 22:07:38.053911 3649 log.go:172] (0xc0000f4370) (0xc0009ae0a0) Stream removed, broadcasting: 1\nI0406 22:07:38.053927 3649 log.go:172] (0xc0000f4370) (0xc000bf2000) Stream removed, broadcasting: 3\nI0406 22:07:38.053933 3649 log.go:172] (0xc0000f4370) (0xc0009ae140) Stream removed, broadcasting: 5\n" Apr 6 22:07:38.058: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Apr 6 22:07:38.058: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Apr 6 22:07:38.062: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true Apr 6 22:07:48.067: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Apr 6 22:07:48.067: INFO: Waiting for statefulset status.replicas updated to 0 Apr 6 22:07:48.084: INFO: Verifying statefulset ss doesn't scale past 1 for another 9.999999255s Apr 6 22:07:49.090: INFO: Verifying statefulset ss doesn't scale past 1 for another 8.993247625s Apr 6 22:07:50.095: INFO: Verifying statefulset ss doesn't scale past 1 for another 7.987618845s Apr 6 22:07:51.098: INFO: Verifying 
statefulset ss doesn't scale past 1 for another 6.983267717s Apr 6 22:07:52.102: INFO: Verifying statefulset ss doesn't scale past 1 for another 5.979613257s Apr 6 22:07:53.106: INFO: Verifying statefulset ss doesn't scale past 1 for another 4.975568151s Apr 6 22:07:54.121: INFO: Verifying statefulset ss doesn't scale past 1 for another 3.9717008s Apr 6 22:07:55.126: INFO: Verifying statefulset ss doesn't scale past 1 for another 2.956425169s Apr 6 22:07:56.130: INFO: Verifying statefulset ss doesn't scale past 1 for another 1.951936287s Apr 6 22:07:57.134: INFO: Verifying statefulset ss doesn't scale past 1 for another 947.597294ms STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-7156 Apr 6 22:07:58.145: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7156 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Apr 6 22:07:58.331: INFO: stderr: "I0406 22:07:58.266998 3669 log.go:172] (0xc0000f4e70) (0xc00064ba40) Create stream\nI0406 22:07:58.267042 3669 log.go:172] (0xc0000f4e70) (0xc00064ba40) Stream added, broadcasting: 1\nI0406 22:07:58.269567 3669 log.go:172] (0xc0000f4e70) Reply frame received for 1\nI0406 22:07:58.269609 3669 log.go:172] (0xc0000f4e70) (0xc000ae0000) Create stream\nI0406 22:07:58.269625 3669 log.go:172] (0xc0000f4e70) (0xc000ae0000) Stream added, broadcasting: 3\nI0406 22:07:58.270556 3669 log.go:172] (0xc0000f4e70) Reply frame received for 3\nI0406 22:07:58.270579 3669 log.go:172] (0xc0000f4e70) (0xc00064bc20) Create stream\nI0406 22:07:58.270586 3669 log.go:172] (0xc0000f4e70) (0xc00064bc20) Stream added, broadcasting: 5\nI0406 22:07:58.271574 3669 log.go:172] (0xc0000f4e70) Reply frame received for 5\nI0406 22:07:58.324225 3669 log.go:172] (0xc0000f4e70) Data frame received for 3\nI0406 22:07:58.324249 3669 log.go:172] (0xc000ae0000) (3) Data frame handling\nI0406 22:07:58.324262 3669 log.go:172] (0xc000ae0000) (3) Data frame sent\nI0406 22:07:58.324269 3669 log.go:172] (0xc0000f4e70) Data frame received for 3\nI0406 22:07:58.324274 3669 log.go:172] (0xc000ae0000) (3) Data frame handling\nI0406 22:07:58.324680 3669 log.go:172] (0xc0000f4e70) Data frame received for 5\nI0406 22:07:58.324719 3669 log.go:172] (0xc00064bc20) (5) Data frame handling\nI0406 22:07:58.324746 3669 log.go:172] (0xc00064bc20) (5) Data frame sent\nI0406 22:07:58.324760 3669 log.go:172] (0xc0000f4e70) Data frame received for 5\nI0406 22:07:58.324770 3669 log.go:172] (0xc00064bc20) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0406 22:07:58.326601 3669 log.go:172] (0xc0000f4e70) Data frame received for 1\nI0406 22:07:58.326643 3669 log.go:172] (0xc00064ba40) (1) Data frame handling\nI0406 22:07:58.326673 3669 log.go:172] (0xc00064ba40) (1) Data frame sent\nI0406 22:07:58.326698 3669 log.go:172] (0xc0000f4e70) (0xc00064ba40) Stream removed, broadcasting: 1\nI0406 22:07:58.326802 3669 log.go:172] (0xc0000f4e70) Go away received\nI0406 22:07:58.327184 3669 log.go:172] (0xc0000f4e70) (0xc00064ba40) Stream removed, broadcasting: 1\nI0406 22:07:58.327206 3669 log.go:172] (0xc0000f4e70) (0xc000ae0000) Stream removed, broadcasting: 3\nI0406 22:07:58.327217 3669 log.go:172] (0xc0000f4e70) (0xc00064bc20) Stream removed, broadcasting: 5\n" Apr 6 22:07:58.331: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Apr 6 22:07:58.331: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on 
ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Apr 6 22:07:58.335: INFO: Found 1 stateful pods, waiting for 3 Apr 6 22:08:08.340: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true Apr 6 22:08:08.340: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true Apr 6 22:08:08.340: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Verifying that stateful set ss was scaled up in order STEP: Scale down will halt with unhealthy stateful pod Apr 6 22:08:08.347: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7156 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Apr 6 22:08:08.588: INFO: stderr: "I0406 22:08:08.482277 3691 log.go:172] (0xc0009ca000) (0xc00070da40) Create stream\nI0406 22:08:08.482345 3691 log.go:172] (0xc0009ca000) (0xc00070da40) Stream added, broadcasting: 1\nI0406 22:08:08.485415 3691 log.go:172] (0xc0009ca000) Reply frame received for 1\nI0406 22:08:08.485451 3691 log.go:172] (0xc0009ca000) (0xc00070dae0) Create stream\nI0406 22:08:08.485465 3691 log.go:172] (0xc0009ca000) (0xc00070dae0) Stream added, broadcasting: 3\nI0406 22:08:08.486584 3691 log.go:172] (0xc0009ca000) Reply frame received for 3\nI0406 22:08:08.486641 3691 log.go:172] (0xc0009ca000) (0xc0009c0000) Create stream\nI0406 22:08:08.486668 3691 log.go:172] (0xc0009ca000) (0xc0009c0000) Stream added, broadcasting: 5\nI0406 22:08:08.487707 3691 log.go:172] (0xc0009ca000) Reply frame received for 5\nI0406 22:08:08.581503 3691 log.go:172] (0xc0009ca000) Data frame received for 3\nI0406 22:08:08.581540 3691 log.go:172] (0xc00070dae0) (3) Data frame handling\nI0406 22:08:08.581576 3691 log.go:172] (0xc0009ca000) Data frame received for 5\nI0406 22:08:08.581632 3691 log.go:172] (0xc0009c0000) (5) Data frame handling\nI0406 22:08:08.581648 3691 log.go:172] (0xc0009c0000) (5) Data frame sent\nI0406 22:08:08.581663 3691 log.go:172] (0xc0009ca000) Data frame received for 5\nI0406 22:08:08.581677 3691 log.go:172] (0xc0009c0000) (5) Data frame handling\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0406 22:08:08.581714 3691 log.go:172] (0xc00070dae0) (3) Data frame sent\nI0406 22:08:08.581731 3691 log.go:172] (0xc0009ca000) Data frame received for 3\nI0406 22:08:08.581740 3691 log.go:172] (0xc00070dae0) (3) Data frame handling\nI0406 22:08:08.583179 3691 log.go:172] (0xc0009ca000) Data frame received for 1\nI0406 22:08:08.583206 3691 log.go:172] (0xc00070da40) (1) Data frame handling\nI0406 22:08:08.583221 3691 log.go:172] (0xc00070da40) (1) Data frame sent\nI0406 22:08:08.583251 3691 log.go:172] (0xc0009ca000) (0xc00070da40) Stream removed, broadcasting: 1\nI0406 22:08:08.583316 3691 log.go:172] (0xc0009ca000) Go away received\nI0406 22:08:08.583698 3691 log.go:172] (0xc0009ca000) (0xc00070da40) Stream removed, broadcasting: 1\nI0406 22:08:08.583721 3691 log.go:172] (0xc0009ca000) (0xc00070dae0) Stream removed, broadcasting: 3\nI0406 22:08:08.583743 3691 log.go:172] (0xc0009ca000) (0xc0009c0000) Stream removed, broadcasting: 5\n" Apr 6 22:08:08.588: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Apr 6 22:08:08.588: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Apr 6 22:08:08.588: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7156 
ss-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Apr 6 22:08:08.830: INFO: stderr: "I0406 22:08:08.716237 3713 log.go:172] (0xc000b380b0) (0xc00054f680) Create stream\nI0406 22:08:08.716287 3713 log.go:172] (0xc000b380b0) (0xc00054f680) Stream added, broadcasting: 1\nI0406 22:08:08.718722 3713 log.go:172] (0xc000b380b0) Reply frame received for 1\nI0406 22:08:08.718767 3713 log.go:172] (0xc000b380b0) (0xc0009d4000) Create stream\nI0406 22:08:08.718778 3713 log.go:172] (0xc000b380b0) (0xc0009d4000) Stream added, broadcasting: 3\nI0406 22:08:08.719844 3713 log.go:172] (0xc000b380b0) Reply frame received for 3\nI0406 22:08:08.719902 3713 log.go:172] (0xc000b380b0) (0xc000729c20) Create stream\nI0406 22:08:08.719932 3713 log.go:172] (0xc000b380b0) (0xc000729c20) Stream added, broadcasting: 5\nI0406 22:08:08.721423 3713 log.go:172] (0xc000b380b0) Reply frame received for 5\nI0406 22:08:08.780154 3713 log.go:172] (0xc000b380b0) Data frame received for 5\nI0406 22:08:08.780176 3713 log.go:172] (0xc000729c20) (5) Data frame handling\nI0406 22:08:08.780192 3713 log.go:172] (0xc000729c20) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0406 22:08:08.823798 3713 log.go:172] (0xc000b380b0) Data frame received for 3\nI0406 22:08:08.823850 3713 log.go:172] (0xc0009d4000) (3) Data frame handling\nI0406 22:08:08.823890 3713 log.go:172] (0xc0009d4000) (3) Data frame sent\nI0406 22:08:08.823930 3713 log.go:172] (0xc000b380b0) Data frame received for 3\nI0406 22:08:08.823945 3713 log.go:172] (0xc0009d4000) (3) Data frame handling\nI0406 22:08:08.823964 3713 log.go:172] (0xc000b380b0) Data frame received for 5\nI0406 22:08:08.823974 3713 log.go:172] (0xc000729c20) (5) Data frame handling\nI0406 22:08:08.825527 3713 log.go:172] (0xc000b380b0) Data frame received for 1\nI0406 22:08:08.825539 3713 log.go:172] (0xc00054f680) (1) Data frame handling\nI0406 22:08:08.825547 3713 log.go:172] (0xc00054f680) (1) Data frame sent\nI0406 22:08:08.825561 3713 log.go:172] (0xc000b380b0) (0xc00054f680) Stream removed, broadcasting: 1\nI0406 22:08:08.825571 3713 log.go:172] (0xc000b380b0) Go away received\nI0406 22:08:08.826072 3713 log.go:172] (0xc000b380b0) (0xc00054f680) Stream removed, broadcasting: 1\nI0406 22:08:08.826100 3713 log.go:172] (0xc000b380b0) (0xc0009d4000) Stream removed, broadcasting: 3\nI0406 22:08:08.826113 3713 log.go:172] (0xc000b380b0) (0xc000729c20) Stream removed, broadcasting: 5\n" Apr 6 22:08:08.830: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Apr 6 22:08:08.830: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Apr 6 22:08:08.830: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7156 ss-2 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Apr 6 22:08:09.071: INFO: stderr: "I0406 22:08:08.964705 3733 log.go:172] (0xc0005ae840) (0xc000570000) Create stream\nI0406 22:08:08.964760 3733 log.go:172] (0xc0005ae840) (0xc000570000) Stream added, broadcasting: 1\nI0406 22:08:08.966978 3733 log.go:172] (0xc0005ae840) Reply frame received for 1\nI0406 22:08:08.967022 3733 log.go:172] (0xc0005ae840) (0xc0006c5b80) Create stream\nI0406 22:08:08.967030 3733 log.go:172] (0xc0005ae840) (0xc0006c5b80) Stream added, broadcasting: 3\nI0406 22:08:08.967905 3733 log.go:172] (0xc0005ae840) Reply frame received for 3\nI0406 22:08:08.967932 3733 log.go:172] 
(0xc0005ae840) (0xc000570140) Create stream\nI0406 22:08:08.967939 3733 log.go:172] (0xc0005ae840) (0xc000570140) Stream added, broadcasting: 5\nI0406 22:08:08.968936 3733 log.go:172] (0xc0005ae840) Reply frame received for 5\nI0406 22:08:09.034144 3733 log.go:172] (0xc0005ae840) Data frame received for 5\nI0406 22:08:09.034181 3733 log.go:172] (0xc000570140) (5) Data frame handling\nI0406 22:08:09.034207 3733 log.go:172] (0xc000570140) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0406 22:08:09.063687 3733 log.go:172] (0xc0005ae840) Data frame received for 5\nI0406 22:08:09.063735 3733 log.go:172] (0xc000570140) (5) Data frame handling\nI0406 22:08:09.063767 3733 log.go:172] (0xc0005ae840) Data frame received for 3\nI0406 22:08:09.063786 3733 log.go:172] (0xc0006c5b80) (3) Data frame handling\nI0406 22:08:09.063799 3733 log.go:172] (0xc0006c5b80) (3) Data frame sent\nI0406 22:08:09.063814 3733 log.go:172] (0xc0005ae840) Data frame received for 3\nI0406 22:08:09.063823 3733 log.go:172] (0xc0006c5b80) (3) Data frame handling\nI0406 22:08:09.065828 3733 log.go:172] (0xc0005ae840) Data frame received for 1\nI0406 22:08:09.065863 3733 log.go:172] (0xc000570000) (1) Data frame handling\nI0406 22:08:09.065892 3733 log.go:172] (0xc000570000) (1) Data frame sent\nI0406 22:08:09.065916 3733 log.go:172] (0xc0005ae840) (0xc000570000) Stream removed, broadcasting: 1\nI0406 22:08:09.065940 3733 log.go:172] (0xc0005ae840) Go away received\nI0406 22:08:09.066365 3733 log.go:172] (0xc0005ae840) (0xc000570000) Stream removed, broadcasting: 1\nI0406 22:08:09.066386 3733 log.go:172] (0xc0005ae840) (0xc0006c5b80) Stream removed, broadcasting: 3\nI0406 22:08:09.066401 3733 log.go:172] (0xc0005ae840) (0xc000570140) Stream removed, broadcasting: 5\n" Apr 6 22:08:09.071: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Apr 6 22:08:09.071: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-2: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Apr 6 22:08:09.071: INFO: Waiting for statefulset status.replicas updated to 0 Apr 6 22:08:09.074: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 3 Apr 6 22:08:19.083: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Apr 6 22:08:19.083: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false Apr 6 22:08:19.083: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false Apr 6 22:08:19.100: INFO: Verifying statefulset ss doesn't scale past 3 for another 9.999999256s Apr 6 22:08:20.105: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.989731475s Apr 6 22:08:21.110: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.984443767s Apr 6 22:08:22.115: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.979633379s Apr 6 22:08:23.120: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.974484486s Apr 6 22:08:24.125: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.969402251s Apr 6 22:08:25.130: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.964629594s Apr 6 22:08:26.135: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.959610331s Apr 6 22:08:27.139: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.954881267s Apr 6 22:08:28.145: INFO: Verifying statefulset ss doesn't scale past 3 for another 949.86805ms STEP: Scaling down stateful 
set ss to 0 replicas and waiting until none of pods will run in namespace statefulset-7156 Apr 6 22:08:29.150: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7156 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Apr 6 22:08:29.361: INFO: stderr: "I0406 22:08:29.273258 3755 log.go:172] (0xc000ac2a50) (0xc000ab2000) Create stream\nI0406 22:08:29.273354 3755 log.go:172] (0xc000ac2a50) (0xc000ab2000) Stream added, broadcasting: 1\nI0406 22:08:29.275272 3755 log.go:172] (0xc000ac2a50) Reply frame received for 1\nI0406 22:08:29.275295 3755 log.go:172] (0xc000ac2a50) (0xc00074a320) Create stream\nI0406 22:08:29.275301 3755 log.go:172] (0xc000ac2a50) (0xc00074a320) Stream added, broadcasting: 3\nI0406 22:08:29.276124 3755 log.go:172] (0xc000ac2a50) Reply frame received for 3\nI0406 22:08:29.276158 3755 log.go:172] (0xc000ac2a50) (0xc000ab2140) Create stream\nI0406 22:08:29.276165 3755 log.go:172] (0xc000ac2a50) (0xc000ab2140) Stream added, broadcasting: 5\nI0406 22:08:29.276864 3755 log.go:172] (0xc000ac2a50) Reply frame received for 5\nI0406 22:08:29.354661 3755 log.go:172] (0xc000ac2a50) Data frame received for 3\nI0406 22:08:29.354693 3755 log.go:172] (0xc00074a320) (3) Data frame handling\nI0406 22:08:29.354717 3755 log.go:172] (0xc00074a320) (3) Data frame sent\nI0406 22:08:29.354747 3755 log.go:172] (0xc000ac2a50) Data frame received for 5\nI0406 22:08:29.354768 3755 log.go:172] (0xc000ab2140) (5) Data frame handling\nI0406 22:08:29.354776 3755 log.go:172] (0xc000ab2140) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0406 22:08:29.354801 3755 log.go:172] (0xc000ac2a50) Data frame received for 3\nI0406 22:08:29.354818 3755 log.go:172] (0xc00074a320) (3) Data frame handling\nI0406 22:08:29.354880 3755 log.go:172] (0xc000ac2a50) Data frame received for 5\nI0406 22:08:29.354905 3755 log.go:172] (0xc000ab2140) (5) Data frame handling\nI0406 22:08:29.355992 3755 log.go:172] (0xc000ac2a50) Data frame received for 1\nI0406 22:08:29.356020 3755 log.go:172] (0xc000ab2000) (1) Data frame handling\nI0406 22:08:29.356042 3755 log.go:172] (0xc000ab2000) (1) Data frame sent\nI0406 22:08:29.356076 3755 log.go:172] (0xc000ac2a50) (0xc000ab2000) Stream removed, broadcasting: 1\nI0406 22:08:29.356488 3755 log.go:172] (0xc000ac2a50) (0xc000ab2000) Stream removed, broadcasting: 1\nI0406 22:08:29.356510 3755 log.go:172] (0xc000ac2a50) (0xc00074a320) Stream removed, broadcasting: 3\nI0406 22:08:29.356522 3755 log.go:172] (0xc000ac2a50) (0xc000ab2140) Stream removed, broadcasting: 5\n" Apr 6 22:08:29.361: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Apr 6 22:08:29.361: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Apr 6 22:08:29.361: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7156 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Apr 6 22:08:29.621: INFO: stderr: "I0406 22:08:29.531406 3775 log.go:172] (0xc000105290) (0xc0009b4280) Create stream\nI0406 22:08:29.531503 3775 log.go:172] (0xc000105290) (0xc0009b4280) Stream added, broadcasting: 1\nI0406 22:08:29.534210 3775 log.go:172] (0xc000105290) Reply frame received for 1\nI0406 22:08:29.534262 3775 log.go:172] (0xc000105290) (0xc000552640) Create stream\nI0406 22:08:29.534281 3775 log.go:172] (0xc000105290) (0xc000552640) Stream added, 
broadcasting: 3\nI0406 22:08:29.535171 3775 log.go:172] (0xc000105290) Reply frame received for 3\nI0406 22:08:29.535202 3775 log.go:172] (0xc000105290) (0xc0009b4320) Create stream\nI0406 22:08:29.535211 3775 log.go:172] (0xc000105290) (0xc0009b4320) Stream added, broadcasting: 5\nI0406 22:08:29.536147 3775 log.go:172] (0xc000105290) Reply frame received for 5\nI0406 22:08:29.614208 3775 log.go:172] (0xc000105290) Data frame received for 5\nI0406 22:08:29.614240 3775 log.go:172] (0xc0009b4320) (5) Data frame handling\nI0406 22:08:29.614248 3775 log.go:172] (0xc0009b4320) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0406 22:08:29.614261 3775 log.go:172] (0xc000105290) Data frame received for 3\nI0406 22:08:29.614267 3775 log.go:172] (0xc000552640) (3) Data frame handling\nI0406 22:08:29.614274 3775 log.go:172] (0xc000552640) (3) Data frame sent\nI0406 22:08:29.614279 3775 log.go:172] (0xc000105290) Data frame received for 3\nI0406 22:08:29.614293 3775 log.go:172] (0xc000552640) (3) Data frame handling\nI0406 22:08:29.614618 3775 log.go:172] (0xc000105290) Data frame received for 5\nI0406 22:08:29.614652 3775 log.go:172] (0xc0009b4320) (5) Data frame handling\nI0406 22:08:29.616239 3775 log.go:172] (0xc000105290) Data frame received for 1\nI0406 22:08:29.616284 3775 log.go:172] (0xc0009b4280) (1) Data frame handling\nI0406 22:08:29.616317 3775 log.go:172] (0xc0009b4280) (1) Data frame sent\nI0406 22:08:29.616342 3775 log.go:172] (0xc000105290) (0xc0009b4280) Stream removed, broadcasting: 1\nI0406 22:08:29.616551 3775 log.go:172] (0xc000105290) Go away received\nI0406 22:08:29.616938 3775 log.go:172] (0xc000105290) (0xc0009b4280) Stream removed, broadcasting: 1\nI0406 22:08:29.616978 3775 log.go:172] (0xc000105290) (0xc000552640) Stream removed, broadcasting: 3\nI0406 22:08:29.616998 3775 log.go:172] (0xc000105290) (0xc0009b4320) Stream removed, broadcasting: 5\n" Apr 6 22:08:29.621: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Apr 6 22:08:29.621: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Apr 6 22:08:29.622: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7156 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Apr 6 22:08:29.806: INFO: stderr: "I0406 22:08:29.746702 3795 log.go:172] (0xc000c1c0b0) (0xc000b9c0a0) Create stream\nI0406 22:08:29.746793 3795 log.go:172] (0xc000c1c0b0) (0xc000b9c0a0) Stream added, broadcasting: 1\nI0406 22:08:29.752179 3795 log.go:172] (0xc000c1c0b0) Reply frame received for 1\nI0406 22:08:29.752227 3795 log.go:172] (0xc000c1c0b0) (0xc000635a40) Create stream\nI0406 22:08:29.752240 3795 log.go:172] (0xc000c1c0b0) (0xc000635a40) Stream added, broadcasting: 3\nI0406 22:08:29.753368 3795 log.go:172] (0xc000c1c0b0) Reply frame received for 3\nI0406 22:08:29.753412 3795 log.go:172] (0xc000c1c0b0) (0xc0005d05a0) Create stream\nI0406 22:08:29.753429 3795 log.go:172] (0xc000c1c0b0) (0xc0005d05a0) Stream added, broadcasting: 5\nI0406 22:08:29.754265 3795 log.go:172] (0xc000c1c0b0) Reply frame received for 5\nI0406 22:08:29.800011 3795 log.go:172] (0xc000c1c0b0) Data frame received for 5\nI0406 22:08:29.800040 3795 log.go:172] (0xc0005d05a0) (5) Data frame handling\nI0406 22:08:29.800062 3795 log.go:172] (0xc0005d05a0) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0406 22:08:29.800675 3795 log.go:172] 
(0xc000c1c0b0) Data frame received for 5\nI0406 22:08:29.800719 3795 log.go:172] (0xc000c1c0b0) Data frame received for 3\nI0406 22:08:29.800756 3795 log.go:172] (0xc000635a40) (3) Data frame handling\nI0406 22:08:29.800778 3795 log.go:172] (0xc000635a40) (3) Data frame sent\nI0406 22:08:29.800805 3795 log.go:172] (0xc0005d05a0) (5) Data frame handling\nI0406 22:08:29.801003 3795 log.go:172] (0xc000c1c0b0) Data frame received for 3\nI0406 22:08:29.801010 3795 log.go:172] (0xc000635a40) (3) Data frame handling\nI0406 22:08:29.802482 3795 log.go:172] (0xc000c1c0b0) Data frame received for 1\nI0406 22:08:29.802494 3795 log.go:172] (0xc000b9c0a0) (1) Data frame handling\nI0406 22:08:29.802501 3795 log.go:172] (0xc000b9c0a0) (1) Data frame sent\nI0406 22:08:29.802514 3795 log.go:172] (0xc000c1c0b0) (0xc000b9c0a0) Stream removed, broadcasting: 1\nI0406 22:08:29.802610 3795 log.go:172] (0xc000c1c0b0) Go away received\nI0406 22:08:29.802778 3795 log.go:172] (0xc000c1c0b0) (0xc000b9c0a0) Stream removed, broadcasting: 1\nI0406 22:08:29.802791 3795 log.go:172] (0xc000c1c0b0) (0xc000635a40) Stream removed, broadcasting: 3\nI0406 22:08:29.802797 3795 log.go:172] (0xc000c1c0b0) (0xc0005d05a0) Stream removed, broadcasting: 5\n" Apr 6 22:08:29.806: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Apr 6 22:08:29.806: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-2: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Apr 6 22:08:29.806: INFO: Scaling statefulset ss to 0 STEP: Verifying that stateful set ss was scaled down in reverse order [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90 Apr 6 22:08:49.820: INFO: Deleting all statefulset in ns statefulset-7156 Apr 6 22:08:49.822: INFO: Scaling statefulset ss to 0 Apr 6 22:08:49.831: INFO: Waiting for statefulset status.replicas updated to 0 Apr 6 22:08:49.833: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 6 22:08:49.850: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-7156" for this suite. 
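The spec above halts scaling by breaking readiness rather than by any StatefulSet-specific switch: the httpd image's readiness probe serves /usr/local/apache2/htdocs/index.html, so moving that file away marks the pod unready and, under the default OrderedReady pod management, blocks further scale operations. A rough manual equivalent of what the framework drives (namespace and set name from this run; the kubectl scale form is an assumption, since the suite scales through the client library):

# Make ss-0 unready by removing the file its readiness probe serves
kubectl exec -n statefulset-7156 ss-0 -- /bin/sh -c 'mv -v /usr/local/apache2/htdocs/index.html /tmp/'
# While ss-0 stays unready, a scale-up creates no new pods
kubectl scale statefulset ss -n statefulset-7156 --replicas=3
# Restore readiness and the set scales up in order: ss-0, then ss-1, then ss-2
kubectl exec -n statefulset-7156 ss-0 -- /bin/sh -c 'mv -v /tmp/index.html /usr/local/apache2/htdocs/'
# Scale-down proceeds in reverse ordinal order and likewise halts on an unready pod
kubectl scale statefulset ss -n statefulset-7156 --replicas=0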
• [SLOW TEST:82.545 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]","total":278,"completed":255,"skipped":4231,"failed":0} SSSSSS ------------------------------ [sig-apps] ReplicationController should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 6 22:08:49.878: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating replication controller my-hostname-basic-ded72050-6ae9-408f-9402-28a5848c5ab4 Apr 6 22:08:49.968: INFO: Pod name my-hostname-basic-ded72050-6ae9-408f-9402-28a5848c5ab4: Found 0 pods out of 1 Apr 6 22:08:54.984: INFO: Pod name my-hostname-basic-ded72050-6ae9-408f-9402-28a5848c5ab4: Found 1 pods out of 1 Apr 6 22:08:54.985: INFO: Ensuring all pods for ReplicationController "my-hostname-basic-ded72050-6ae9-408f-9402-28a5848c5ab4" are running Apr 6 22:08:54.990: INFO: Pod "my-hostname-basic-ded72050-6ae9-408f-9402-28a5848c5ab4-9bcbs" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-04-06 22:08:50 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-04-06 22:08:52 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-04-06 22:08:52 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-04-06 22:08:49 +0000 UTC Reason: Message:}]) Apr 6 22:08:54.990: INFO: Trying to dial the pod Apr 6 22:09:00.030: INFO: Controller my-hostname-basic-ded72050-6ae9-408f-9402-28a5848c5ab4: Got expected result from replica 1 [my-hostname-basic-ded72050-6ae9-408f-9402-28a5848c5ab4-9bcbs]: "my-hostname-basic-ded72050-6ae9-408f-9402-28a5848c5ab4-9bcbs", 1 of 1 required successes so far [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 6 22:09:00.030: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-869" for this suite. 
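The controller above serves "a basic image on each replica" by running a public hostname-echoing container and dialing each replica until it answers with its own pod name. A minimal sketch of such a controller (image tag and names are assumptions, not read from the log):

cat <<'EOF' | kubectl create -f -
apiVersion: v1
kind: ReplicationController
metadata:
  name: my-hostname-basic
spec:
  replicas: 1
  selector:
    name: my-hostname-basic
  template:
    metadata:
      labels:
        name: my-hostname-basic
    spec:
      containers:
      - name: serve-hostname
        image: registry.k8s.io/e2e-test-images/agnhost:2.39   # assumed public image
        args: ["serve-hostname"]                              # replies with the pod's hostname
        ports:
        - containerPort: 9376
EOF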
• [SLOW TEST:10.159 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] ReplicationController should serve a basic image on each replica with a public image [Conformance]","total":278,"completed":256,"skipped":4237,"failed":0} SS ------------------------------ [sig-apps] Job should adopt matching orphans and release non-matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 6 22:09:00.038: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename job STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching orphans and release non-matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a job STEP: Ensuring active pods == parallelism STEP: Orphaning one of the Job's Pods Apr 6 22:09:04.633: INFO: Successfully updated pod "adopt-release-26x9x" STEP: Checking that the Job readopts the Pod Apr 6 22:09:04.633: INFO: Waiting up to 15m0s for pod "adopt-release-26x9x" in namespace "job-8929" to be "adopted" Apr 6 22:09:04.666: INFO: Pod "adopt-release-26x9x": Phase="Running", Reason="", readiness=true. Elapsed: 32.951078ms Apr 6 22:09:06.671: INFO: Pod "adopt-release-26x9x": Phase="Running", Reason="", readiness=true. Elapsed: 2.036999491s Apr 6 22:09:06.671: INFO: Pod "adopt-release-26x9x" satisfied condition "adopted" STEP: Removing the labels from the Job's Pod Apr 6 22:09:07.179: INFO: Successfully updated pod "adopt-release-26x9x" STEP: Checking that the Job releases the Pod Apr 6 22:09:07.179: INFO: Waiting up to 15m0s for pod "adopt-release-26x9x" in namespace "job-8929" to be "released" Apr 6 22:09:07.185: INFO: Pod "adopt-release-26x9x": Phase="Running", Reason="", readiness=true. Elapsed: 5.208368ms Apr 6 22:09:09.188: INFO: Pod "adopt-release-26x9x": Phase="Running", Reason="", readiness=true. Elapsed: 2.008997152s Apr 6 22:09:09.188: INFO: Pod "adopt-release-26x9x" satisfied condition "released" [AfterEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 6 22:09:09.188: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "job-8929" for this suite. 
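Adoption and release in the Job test turn entirely on ownerReferences and the Job's label selector: stripping the pod's controller reference makes it an orphan the Job controller re-adopts, while stripping the selector labels makes the controller release it. Sketched with the pod name from this run (the exact label keys are an assumption; the controller matches whatever the Job's selector specifies):

# Orphan the pod: drop its controller ownerReference, then watch the Job re-adopt it
kubectl patch pod adopt-release-26x9x -n job-8929 --type=json \
  -p '[{"op":"remove","path":"/metadata/ownerReferences"}]'
# Remove the selector labels instead and the controller releases the pod
kubectl label pod adopt-release-26x9x -n job-8929 job-name- controller-uid-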
• [SLOW TEST:9.158 seconds] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should adopt matching orphans and release non-matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Job should adopt matching orphans and release non-matching pods [Conformance]","total":278,"completed":257,"skipped":4239,"failed":0} SSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 6 22:09:09.197: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating the pod Apr 6 22:09:13.792: INFO: Successfully updated pod "annotationupdatefc184848-0325-47c8-850f-c7d0b046ef7b" [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 6 22:09:15.840: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-4368" for this suite. 
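The annotation update propagates into the container because metadata.annotations is mounted through a downward API volume, which the kubelet rewrites on its sync period; environment variables, by contrast, are fixed at container start. A sketch with a hypothetical pod name and mount path:

# Pod spec fragment (projected downward API volume):
#   - downwardAPI:
#       items:
#       - path: annotations
#         fieldRef:
#           fieldPath: metadata.annotations
kubectl annotate pod annotationupdate-demo --overwrite builder=bar
# After the next kubelet sync, the mounted file reflects the change
kubectl exec annotationupdate-demo -- cat /etc/podinfo/annotations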
• [SLOW TEST:6.650 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34 should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance]","total":278,"completed":258,"skipped":4246,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 6 22:09:15.847: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name projected-configmap-test-volume-map-947b3335-92bf-48f8-9fbf-34d29577a235 STEP: Creating a pod to test consume configMaps Apr 6 22:09:16.008: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-040f4d57-55f8-4efe-9a7c-b96895c00f6d" in namespace "projected-5434" to be "success or failure" Apr 6 22:09:16.011: INFO: Pod "pod-projected-configmaps-040f4d57-55f8-4efe-9a7c-b96895c00f6d": Phase="Pending", Reason="", readiness=false. Elapsed: 3.793758ms Apr 6 22:09:18.036: INFO: Pod "pod-projected-configmaps-040f4d57-55f8-4efe-9a7c-b96895c00f6d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.028258996s Apr 6 22:09:20.040: INFO: Pod "pod-projected-configmaps-040f4d57-55f8-4efe-9a7c-b96895c00f6d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.031838213s STEP: Saw pod success Apr 6 22:09:20.040: INFO: Pod "pod-projected-configmaps-040f4d57-55f8-4efe-9a7c-b96895c00f6d" satisfied condition "success or failure" Apr 6 22:09:20.057: INFO: Trying to get logs from node jerma-worker2 pod pod-projected-configmaps-040f4d57-55f8-4efe-9a7c-b96895c00f6d container projected-configmap-volume-test: STEP: delete the pod Apr 6 22:09:20.076: INFO: Waiting for pod pod-projected-configmaps-040f4d57-55f8-4efe-9a7c-b96895c00f6d to disappear Apr 6 22:09:20.080: INFO: Pod pod-projected-configmaps-040f4d57-55f8-4efe-9a7c-b96895c00f6d no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 6 22:09:20.080: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5434" for this suite. 
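The "with mappings" variant differs from the plain ConfigMap volume test in that items remaps a key to a chosen file path instead of using the key name itself. Sketch (names and paths illustrative):

kubectl create configmap projected-cm-demo --from-literal=data-1=value-1
# Pod spec fragment: project key data-1 to a custom relative path
#   - configMap:
#       name: projected-cm-demo
#       items:
#       - key: data-1
#         path: path/to/data-2
# The container then reads value-1 from <mountPath>/path/to/data-2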
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":278,"completed":259,"skipped":4278,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 6 22:09:20.090: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139 [It] should be able to change the type from ExternalName to ClusterIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating a service externalname-service with the type=ExternalName in namespace services-7413 STEP: changing the ExternalName service to type=ClusterIP STEP: creating replication controller externalname-service in namespace services-7413 I0406 22:09:20.210591 6 runners.go:189] Created replication controller with name: externalname-service, namespace: services-7413, replica count: 2 I0406 22:09:23.261268 6 runners.go:189] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0406 22:09:26.261513 6 runners.go:189] externalname-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Apr 6 22:09:26.261: INFO: Creating new exec pod Apr 6 22:09:31.283: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-7413 execpodlt2pd -- /bin/sh -x -c nc -zv -t -w 2 externalname-service 80' Apr 6 22:09:31.520: INFO: stderr: "I0406 22:09:31.415062 3816 log.go:172] (0xc00010a9a0) (0xc00094e000) Create stream\nI0406 22:09:31.415128 3816 log.go:172] (0xc00010a9a0) (0xc00094e000) Stream added, broadcasting: 1\nI0406 22:09:31.417707 3816 log.go:172] (0xc00010a9a0) Reply frame received for 1\nI0406 22:09:31.417757 3816 log.go:172] (0xc00010a9a0) (0xc00094e0a0) Create stream\nI0406 22:09:31.417778 3816 log.go:172] (0xc00010a9a0) (0xc00094e0a0) Stream added, broadcasting: 3\nI0406 22:09:31.418963 3816 log.go:172] (0xc00010a9a0) Reply frame received for 3\nI0406 22:09:31.418996 3816 log.go:172] (0xc00010a9a0) (0xc0005c99a0) Create stream\nI0406 22:09:31.419010 3816 log.go:172] (0xc00010a9a0) (0xc0005c99a0) Stream added, broadcasting: 5\nI0406 22:09:31.420016 3816 log.go:172] (0xc00010a9a0) Reply frame received for 5\nI0406 22:09:31.514167 3816 log.go:172] (0xc00010a9a0) Data frame received for 5\nI0406 22:09:31.514208 3816 log.go:172] (0xc0005c99a0) (5) Data frame handling\nI0406 22:09:31.514221 3816 log.go:172] (0xc0005c99a0) (5) Data frame sent\nI0406 22:09:31.514228 3816 log.go:172] (0xc00010a9a0) Data frame received for 5\n+ nc -zv -t -w 2 externalname-service 80\nI0406 22:09:31.514248 3816 log.go:172] (0xc00010a9a0) Data frame received for 3\nI0406 22:09:31.514266 3816 log.go:172] (0xc00094e0a0) (3) Data frame handling\nI0406 22:09:31.514280 3816 log.go:172] (0xc0005c99a0) (5) Data 
frame handling\nI0406 22:09:31.514290 3816 log.go:172] (0xc0005c99a0) (5) Data frame sent\nConnection to externalname-service 80 port [tcp/http] succeeded!\nI0406 22:09:31.514714 3816 log.go:172] (0xc00010a9a0) Data frame received for 5\nI0406 22:09:31.514733 3816 log.go:172] (0xc0005c99a0) (5) Data frame handling\nI0406 22:09:31.515916 3816 log.go:172] (0xc00010a9a0) Data frame received for 1\nI0406 22:09:31.515931 3816 log.go:172] (0xc00094e000) (1) Data frame handling\nI0406 22:09:31.515940 3816 log.go:172] (0xc00094e000) (1) Data frame sent\nI0406 22:09:31.515961 3816 log.go:172] (0xc00010a9a0) (0xc00094e000) Stream removed, broadcasting: 1\nI0406 22:09:31.515987 3816 log.go:172] (0xc00010a9a0) Go away received\nI0406 22:09:31.516353 3816 log.go:172] (0xc00010a9a0) (0xc00094e000) Stream removed, broadcasting: 1\nI0406 22:09:31.516375 3816 log.go:172] (0xc00010a9a0) (0xc00094e0a0) Stream removed, broadcasting: 3\nI0406 22:09:31.516388 3816 log.go:172] (0xc00010a9a0) (0xc0005c99a0) Stream removed, broadcasting: 5\n" Apr 6 22:09:31.521: INFO: stdout: "" Apr 6 22:09:31.522: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-7413 execpodlt2pd -- /bin/sh -x -c nc -zv -t -w 2 10.107.190.4 80' Apr 6 22:09:31.722: INFO: stderr: "I0406 22:09:31.648304 3838 log.go:172] (0xc0000f42c0) (0xc0006b8820) Create stream\nI0406 22:09:31.648394 3838 log.go:172] (0xc0000f42c0) (0xc0006b8820) Stream added, broadcasting: 1\nI0406 22:09:31.651601 3838 log.go:172] (0xc0000f42c0) Reply frame received for 1\nI0406 22:09:31.651656 3838 log.go:172] (0xc0000f42c0) (0xc000711d60) Create stream\nI0406 22:09:31.651670 3838 log.go:172] (0xc0000f42c0) (0xc000711d60) Stream added, broadcasting: 3\nI0406 22:09:31.652689 3838 log.go:172] (0xc0000f42c0) Reply frame received for 3\nI0406 22:09:31.652731 3838 log.go:172] (0xc0000f42c0) (0xc0006f95e0) Create stream\nI0406 22:09:31.652747 3838 log.go:172] (0xc0000f42c0) (0xc0006f95e0) Stream added, broadcasting: 5\nI0406 22:09:31.653892 3838 log.go:172] (0xc0000f42c0) Reply frame received for 5\nI0406 22:09:31.715952 3838 log.go:172] (0xc0000f42c0) Data frame received for 5\nI0406 22:09:31.715974 3838 log.go:172] (0xc0006f95e0) (5) Data frame handling\nI0406 22:09:31.715997 3838 log.go:172] (0xc0006f95e0) (5) Data frame sent\n+ nc -zv -t -w 2 10.107.190.4 80\nI0406 22:09:31.716471 3838 log.go:172] (0xc0000f42c0) Data frame received for 5\nI0406 22:09:31.716567 3838 log.go:172] (0xc0006f95e0) (5) Data frame handling\nI0406 22:09:31.716596 3838 log.go:172] (0xc0006f95e0) (5) Data frame sent\nConnection to 10.107.190.4 80 port [tcp/http] succeeded!\nI0406 22:09:31.716637 3838 log.go:172] (0xc0000f42c0) Data frame received for 5\nI0406 22:09:31.716661 3838 log.go:172] (0xc0006f95e0) (5) Data frame handling\nI0406 22:09:31.716914 3838 log.go:172] (0xc0000f42c0) Data frame received for 3\nI0406 22:09:31.716945 3838 log.go:172] (0xc000711d60) (3) Data frame handling\nI0406 22:09:31.718325 3838 log.go:172] (0xc0000f42c0) Data frame received for 1\nI0406 22:09:31.718400 3838 log.go:172] (0xc0006b8820) (1) Data frame handling\nI0406 22:09:31.718436 3838 log.go:172] (0xc0006b8820) (1) Data frame sent\nI0406 22:09:31.718461 3838 log.go:172] (0xc0000f42c0) (0xc0006b8820) Stream removed, broadcasting: 1\nI0406 22:09:31.718484 3838 log.go:172] (0xc0000f42c0) Go away received\nI0406 22:09:31.718908 3838 log.go:172] (0xc0000f42c0) (0xc0006b8820) Stream removed, broadcasting: 1\nI0406 22:09:31.718923 3838 log.go:172] (0xc0000f42c0) (0xc000711d60) 
Stream removed, broadcasting: 3\nI0406 22:09:31.718930 3838 log.go:172] (0xc0000f42c0) (0xc0006f95e0) Stream removed, broadcasting: 5\n" Apr 6 22:09:31.722: INFO: stdout: "" Apr 6 22:09:31.722: INFO: Cleaning up the ExternalName to ClusterIP test service [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 6 22:09:31.770: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-7413" for this suite. [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143 • [SLOW TEST:11.691 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from ExternalName to ClusterIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","total":278,"completed":260,"skipped":4328,"failed":0} SSSS ------------------------------ [k8s.io] Lease lease API should be available [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Lease /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 6 22:09:31.780: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename lease-test STEP: Waiting for a default service account to be provisioned in namespace [It] lease API should be available [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [AfterEach] [k8s.io] Lease /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 6 22:09:31.883: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "lease-test-875" for this suite. 
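The type flip above turns a pure DNS alias (type=ExternalName) into a selector-backed ClusterIP service, which is why the suite stands up a replication controller first and then probes port 80 with nc. A manual sketch (the suite drives this through the API; when leaving type=ExternalName, externalName must be cleared and ports supplied, and a selector matching the backend pods is assumed):

kubectl create service externalname externalname-service --external-name=example.com
kubectl patch service externalname-service -p \
  '{"spec":{"type":"ClusterIP","externalName":null,"ports":[{"port":80,"targetPort":80}]}}'
# From any pod in the cluster the service should now accept connections:
kubectl exec execpod -- nc -zv -t -w 2 externalname-service 80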
•{"msg":"PASSED [k8s.io] Lease lease API should be available [Conformance]","total":278,"completed":261,"skipped":4332,"failed":0} SSSSSSS ------------------------------ [k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class should be set on Pods with matching resource requests and limits for memory and cpu [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 6 22:09:31.889: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods Set QOS Class /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:178 [It] should be set on Pods with matching resource requests and limits for memory and cpu [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying QOS class is set on the pod [AfterEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 6 22:09:32.001: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-1794" for this suite. •{"msg":"PASSED [k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class should be set on Pods with matching resource requests and limits for memory and cpu [Conformance]","total":278,"completed":262,"skipped":4339,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 6 22:09:32.019: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Apr 6 22:09:32.851: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Apr 6 22:09:34.862: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721807772, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721807772, loc:(*time.Location)(0x78ee080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", 
LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721807773, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721807772, loc:(*time.Location)(0x78ee080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Apr 6 22:09:37.890: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should be able to deny pod and configmap creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Registering the webhook via the AdmissionRegistration API STEP: create a pod that should be denied by the webhook STEP: create a pod that causes the webhook to hang STEP: create a configmap that should be denied by the webhook STEP: create a configmap that should be admitted by the webhook STEP: update (PUT) the admitted configmap to a non-compliant one should be rejected by the webhook STEP: update (PATCH) the admitted configmap to a non-compliant one should be rejected by the webhook STEP: create a namespace that bypass the webhook STEP: create a configmap that violates the webhook policy but is in a whitelisted namespace [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 6 22:09:47.994: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-6836" for this suite. STEP: Destroying namespace "webhook-6836-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:16.102 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to deny pod and configmap creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","total":278,"completed":263,"skipped":4376,"failed":0} [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 6 22:09:48.120: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64 STEP: create the container to handle the HTTPGet hook request. 
[It] should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the pod with lifecycle hook STEP: delete the pod with lifecycle hook Apr 6 22:09:56.244: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Apr 6 22:09:56.256: INFO: Pod pod-with-prestop-exec-hook still exists Apr 6 22:09:58.256: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Apr 6 22:09:58.260: INFO: Pod pod-with-prestop-exec-hook still exists Apr 6 22:10:00.256: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Apr 6 22:10:00.260: INFO: Pod pod-with-prestop-exec-hook still exists Apr 6 22:10:02.256: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Apr 6 22:10:02.260: INFO: Pod pod-with-prestop-exec-hook still exists Apr 6 22:10:04.256: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Apr 6 22:10:04.260: INFO: Pod pod-with-prestop-exec-hook still exists Apr 6 22:10:06.256: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Apr 6 22:10:06.260: INFO: Pod pod-with-prestop-exec-hook still exists Apr 6 22:10:08.256: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Apr 6 22:10:08.260: INFO: Pod pod-with-prestop-exec-hook still exists Apr 6 22:10:10.256: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Apr 6 22:10:10.260: INFO: Pod pod-with-prestop-exec-hook no longer exists STEP: check prestop hook [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 6 22:10:10.266: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-5773" for this suite. 
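The long tail of "still exists" polling above is expected behaviour: deleting a pod with a preStop exec hook runs the hook to completion (bounded by the grace period) before the container receives SIGTERM, so the pod lingers in Terminating. Minimal shape of such a pod (image, command, and timings illustrative):

cat <<'EOF' | kubectl create -f -
apiVersion: v1
kind: Pod
metadata:
  name: prestop-demo
spec:
  terminationGracePeriodSeconds: 30
  containers:
  - name: main
    image: busybox
    command: ["sh", "-c", "sleep 3600"]
    lifecycle:
      preStop:
        exec:
          command: ["sh", "-c", "echo prestop ran; sleep 5"]
EOF
kubectl delete pod prestop-demo   # waits for the hook before the kill, up to the grace period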
• [SLOW TEST:22.153 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]","total":278,"completed":264,"skipped":4376,"failed":0} SSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 6 22:10:10.274: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set Apr 6 22:10:14.522: INFO: Expected: &{OK} to match Container's Termination Message: OK -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 6 22:10:14.554: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-9708" for this suite. 
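The Expected: &{OK} assertion reads the container's termination message from the file at terminationMessagePath; FallbackToLogsOnError only substitutes the container logs when the container fails with an empty message, so a succeeding pod still reports the file contents. Sketch (pod name hypothetical):

# Container spec fragment:
#   command: ["sh", "-c", "echo -n OK > /dev/termination-log"]
#   terminationMessagePath: /dev/termination-log
#   terminationMessagePolicy: FallbackToLogsOnError
kubectl get pod termination-demo \
  -o jsonpath='{.status.containerStatuses[0].state.terminated.message}'   # -> OK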
•{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":278,"completed":265,"skipped":4390,"failed":0} SSSSSSSSSSSSS ------------------------------ [k8s.io] Security Context When creating a container with runAsUser should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 6 22:10:14.561: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:39 [It] should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Apr 6 22:10:14.621: INFO: Waiting up to 5m0s for pod "busybox-user-65534-fde588fa-82e4-40fe-944c-13df3cf46240" in namespace "security-context-test-6128" to be "success or failure" Apr 6 22:10:14.627: INFO: Pod "busybox-user-65534-fde588fa-82e4-40fe-944c-13df3cf46240": Phase="Pending", Reason="", readiness=false. Elapsed: 6.430884ms Apr 6 22:10:16.699: INFO: Pod "busybox-user-65534-fde588fa-82e4-40fe-944c-13df3cf46240": Phase="Pending", Reason="", readiness=false. Elapsed: 2.078577835s Apr 6 22:10:18.703: INFO: Pod "busybox-user-65534-fde588fa-82e4-40fe-944c-13df3cf46240": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.082608769s Apr 6 22:10:18.703: INFO: Pod "busybox-user-65534-fde588fa-82e4-40fe-944c-13df3cf46240" satisfied condition "success or failure" [AfterEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 6 22:10:18.703: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-6128" for this suite. 
•{"msg":"PASSED [k8s.io] Security Context When creating a container with runAsUser should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":266,"skipped":4403,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 6 22:10:18.712: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139 [It] should be able to change the type from ExternalName to NodePort [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating a service externalname-service with the type=ExternalName in namespace services-1683 STEP: changing the ExternalName service to type=NodePort STEP: creating replication controller externalname-service in namespace services-1683 I0406 22:10:18.853212 6 runners.go:189] Created replication controller with name: externalname-service, namespace: services-1683, replica count: 2 I0406 22:10:21.903608 6 runners.go:189] externalname-service Pods: 2 out of 2 created, 1 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0406 22:10:24.903847 6 runners.go:189] externalname-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Apr 6 22:10:24.903: INFO: Creating new exec pod Apr 6 22:10:29.934: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-1683 execpodjgs75 -- /bin/sh -x -c nc -zv -t -w 2 externalname-service 80' Apr 6 22:10:30.158: INFO: stderr: "I0406 22:10:30.069916 3859 log.go:172] (0xc000934630) (0xc0005e9b80) Create stream\nI0406 22:10:30.069984 3859 log.go:172] (0xc000934630) (0xc0005e9b80) Stream added, broadcasting: 1\nI0406 22:10:30.072611 3859 log.go:172] (0xc000934630) Reply frame received for 1\nI0406 22:10:30.072666 3859 log.go:172] (0xc000934630) (0xc0005e9d60) Create stream\nI0406 22:10:30.072692 3859 log.go:172] (0xc000934630) (0xc0005e9d60) Stream added, broadcasting: 3\nI0406 22:10:30.073868 3859 log.go:172] (0xc000934630) Reply frame received for 3\nI0406 22:10:30.073928 3859 log.go:172] (0xc000934630) (0xc000a30000) Create stream\nI0406 22:10:30.073947 3859 log.go:172] (0xc000934630) (0xc000a30000) Stream added, broadcasting: 5\nI0406 22:10:30.074843 3859 log.go:172] (0xc000934630) Reply frame received for 5\nI0406 22:10:30.152648 3859 log.go:172] (0xc000934630) Data frame received for 5\nI0406 22:10:30.152711 3859 log.go:172] (0xc000a30000) (5) Data frame handling\nI0406 22:10:30.152734 3859 log.go:172] (0xc000a30000) (5) Data frame sent\n+ nc -zv -t -w 2 externalname-service 80\nConnection to externalname-service 80 port [tcp/http] succeeded!\nI0406 22:10:30.152762 3859 log.go:172] (0xc000934630) Data frame received for 3\nI0406 22:10:30.152776 3859 log.go:172] (0xc0005e9d60) (3) Data frame handling\nI0406 22:10:30.152809 3859 log.go:172] (0xc000934630) Data frame received for 5\nI0406 
22:10:30.152829 3859 log.go:172] (0xc000a30000) (5) Data frame handling\nI0406 22:10:30.154639 3859 log.go:172] (0xc000934630) Data frame received for 1\nI0406 22:10:30.154677 3859 log.go:172] (0xc0005e9b80) (1) Data frame handling\nI0406 22:10:30.154698 3859 log.go:172] (0xc0005e9b80) (1) Data frame sent\nI0406 22:10:30.154721 3859 log.go:172] (0xc000934630) (0xc0005e9b80) Stream removed, broadcasting: 1\nI0406 22:10:30.154753 3859 log.go:172] (0xc000934630) Go away received\nI0406 22:10:30.155111 3859 log.go:172] (0xc000934630) (0xc0005e9b80) Stream removed, broadcasting: 1\nI0406 22:10:30.155132 3859 log.go:172] (0xc000934630) (0xc0005e9d60) Stream removed, broadcasting: 3\nI0406 22:10:30.155141 3859 log.go:172] (0xc000934630) (0xc000a30000) Stream removed, broadcasting: 5\n" Apr 6 22:10:30.159: INFO: stdout: "" Apr 6 22:10:30.159: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-1683 execpodjgs75 -- /bin/sh -x -c nc -zv -t -w 2 10.107.56.125 80' Apr 6 22:10:30.398: INFO: stderr: "I0406 22:10:30.318398 3879 log.go:172] (0xc000106bb0) (0xc0009841e0) Create stream\nI0406 22:10:30.318460 3879 log.go:172] (0xc000106bb0) (0xc0009841e0) Stream added, broadcasting: 1\nI0406 22:10:30.321028 3879 log.go:172] (0xc000106bb0) Reply frame received for 1\nI0406 22:10:30.321072 3879 log.go:172] (0xc000106bb0) (0xc00064c780) Create stream\nI0406 22:10:30.321085 3879 log.go:172] (0xc000106bb0) (0xc00064c780) Stream added, broadcasting: 3\nI0406 22:10:30.322263 3879 log.go:172] (0xc000106bb0) Reply frame received for 3\nI0406 22:10:30.322329 3879 log.go:172] (0xc000106bb0) (0xc000984280) Create stream\nI0406 22:10:30.322363 3879 log.go:172] (0xc000106bb0) (0xc000984280) Stream added, broadcasting: 5\nI0406 22:10:30.323349 3879 log.go:172] (0xc000106bb0) Reply frame received for 5\nI0406 22:10:30.393597 3879 log.go:172] (0xc000106bb0) Data frame received for 3\nI0406 22:10:30.393744 3879 log.go:172] (0xc00064c780) (3) Data frame handling\nI0406 22:10:30.393777 3879 log.go:172] (0xc000106bb0) Data frame received for 5\nI0406 22:10:30.393791 3879 log.go:172] (0xc000984280) (5) Data frame handling\nI0406 22:10:30.393804 3879 log.go:172] (0xc000984280) (5) Data frame sent\nI0406 22:10:30.393818 3879 log.go:172] (0xc000106bb0) Data frame received for 5\nI0406 22:10:30.393837 3879 log.go:172] (0xc000984280) (5) Data frame handling\n+ nc -zv -t -w 2 10.107.56.125 80\nConnection to 10.107.56.125 80 port [tcp/http] succeeded!\nI0406 22:10:30.394754 3879 log.go:172] (0xc000106bb0) Data frame received for 1\nI0406 22:10:30.394799 3879 log.go:172] (0xc0009841e0) (1) Data frame handling\nI0406 22:10:30.394823 3879 log.go:172] (0xc0009841e0) (1) Data frame sent\nI0406 22:10:30.394848 3879 log.go:172] (0xc000106bb0) (0xc0009841e0) Stream removed, broadcasting: 1\nI0406 22:10:30.394874 3879 log.go:172] (0xc000106bb0) Go away received\nI0406 22:10:30.395200 3879 log.go:172] (0xc000106bb0) (0xc0009841e0) Stream removed, broadcasting: 1\nI0406 22:10:30.395216 3879 log.go:172] (0xc000106bb0) (0xc00064c780) Stream removed, broadcasting: 3\nI0406 22:10:30.395223 3879 log.go:172] (0xc000106bb0) (0xc000984280) Stream removed, broadcasting: 5\n" Apr 6 22:10:30.399: INFO: stdout: "" Apr 6 22:10:30.399: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-1683 execpodjgs75 -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.10 32086' Apr 6 22:10:30.596: INFO: stderr: "I0406 22:10:30.524017 3900 log.go:172] (0xc0000f54a0) (0xc000665ae0) Create 
stream\nI0406 22:10:30.524069 3900 log.go:172] (0xc0000f54a0) (0xc000665ae0) Stream added, broadcasting: 1\nI0406 22:10:30.533483 3900 log.go:172] (0xc0000f54a0) Reply frame received for 1\nI0406 22:10:30.533536 3900 log.go:172] (0xc0000f54a0) (0xc000665cc0) Create stream\nI0406 22:10:30.533549 3900 log.go:172] (0xc0000f54a0) (0xc000665cc0) Stream added, broadcasting: 3\nI0406 22:10:30.535304 3900 log.go:172] (0xc0000f54a0) Reply frame received for 3\nI0406 22:10:30.535343 3900 log.go:172] (0xc0000f54a0) (0xc0006ba000) Create stream\nI0406 22:10:30.535354 3900 log.go:172] (0xc0000f54a0) (0xc0006ba000) Stream added, broadcasting: 5\nI0406 22:10:30.537031 3900 log.go:172] (0xc0000f54a0) Reply frame received for 5\nI0406 22:10:30.589265 3900 log.go:172] (0xc0000f54a0) Data frame received for 5\nI0406 22:10:30.589287 3900 log.go:172] (0xc0006ba000) (5) Data frame handling\nI0406 22:10:30.589295 3900 log.go:172] (0xc0006ba000) (5) Data frame sent\nI0406 22:10:30.589300 3900 log.go:172] (0xc0000f54a0) Data frame received for 5\nI0406 22:10:30.589306 3900 log.go:172] (0xc0006ba000) (5) Data frame handling\n+ nc -zv -t -w 2 172.17.0.10 32086\nConnection to 172.17.0.10 32086 port [tcp/32086] succeeded!\nI0406 22:10:30.589327 3900 log.go:172] (0xc0006ba000) (5) Data frame sent\nI0406 22:10:30.589757 3900 log.go:172] (0xc0000f54a0) Data frame received for 3\nI0406 22:10:30.589781 3900 log.go:172] (0xc000665cc0) (3) Data frame handling\nI0406 22:10:30.589857 3900 log.go:172] (0xc0000f54a0) Data frame received for 5\nI0406 22:10:30.589870 3900 log.go:172] (0xc0006ba000) (5) Data frame handling\nI0406 22:10:30.591524 3900 log.go:172] (0xc0000f54a0) Data frame received for 1\nI0406 22:10:30.591547 3900 log.go:172] (0xc000665ae0) (1) Data frame handling\nI0406 22:10:30.591567 3900 log.go:172] (0xc000665ae0) (1) Data frame sent\nI0406 22:10:30.591584 3900 log.go:172] (0xc0000f54a0) (0xc000665ae0) Stream removed, broadcasting: 1\nI0406 22:10:30.591612 3900 log.go:172] (0xc0000f54a0) Go away received\nI0406 22:10:30.592013 3900 log.go:172] (0xc0000f54a0) (0xc000665ae0) Stream removed, broadcasting: 1\nI0406 22:10:30.592043 3900 log.go:172] (0xc0000f54a0) (0xc000665cc0) Stream removed, broadcasting: 3\nI0406 22:10:30.592055 3900 log.go:172] (0xc0000f54a0) (0xc0006ba000) Stream removed, broadcasting: 5\n" Apr 6 22:10:30.596: INFO: stdout: "" Apr 6 22:10:30.597: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-1683 execpodjgs75 -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.8 32086' Apr 6 22:10:30.801: INFO: stderr: "I0406 22:10:30.732752 3922 log.go:172] (0xc0003db080) (0xc0006cfe00) Create stream\nI0406 22:10:30.732805 3922 log.go:172] (0xc0003db080) (0xc0006cfe00) Stream added, broadcasting: 1\nI0406 22:10:30.735232 3922 log.go:172] (0xc0003db080) Reply frame received for 1\nI0406 22:10:30.735275 3922 log.go:172] (0xc0003db080) (0xc000a3c000) Create stream\nI0406 22:10:30.735294 3922 log.go:172] (0xc0003db080) (0xc000a3c000) Stream added, broadcasting: 3\nI0406 22:10:30.736022 3922 log.go:172] (0xc0003db080) Reply frame received for 3\nI0406 22:10:30.736046 3922 log.go:172] (0xc0003db080) (0xc000a3c0a0) Create stream\nI0406 22:10:30.736061 3922 log.go:172] (0xc0003db080) (0xc000a3c0a0) Stream added, broadcasting: 5\nI0406 22:10:30.736765 3922 log.go:172] (0xc0003db080) Reply frame received for 5\nI0406 22:10:30.793970 3922 log.go:172] (0xc0003db080) Data frame received for 3\nI0406 22:10:30.793997 3922 log.go:172] (0xc000a3c000) (3) Data frame handling\nI0406 
22:10:30.794096 3922 log.go:172] (0xc0003db080) Data frame received for 5\nI0406 22:10:30.794122 3922 log.go:172] (0xc000a3c0a0) (5) Data frame handling\nI0406 22:10:30.794149 3922 log.go:172] (0xc000a3c0a0) (5) Data frame sent\n+ nc -zv -t -w 2 172.17.0.8 32086\nConnection to 172.17.0.8 32086 port [tcp/32086] succeeded!\nI0406 22:10:30.794192 3922 log.go:172] (0xc0003db080) Data frame received for 5\nI0406 22:10:30.794215 3922 log.go:172] (0xc000a3c0a0) (5) Data frame handling\nI0406 22:10:30.795771 3922 log.go:172] (0xc0003db080) Data frame received for 1\nI0406 22:10:30.795900 3922 log.go:172] (0xc0006cfe00) (1) Data frame handling\nI0406 22:10:30.795950 3922 log.go:172] (0xc0006cfe00) (1) Data frame sent\nI0406 22:10:30.795987 3922 log.go:172] (0xc0003db080) (0xc0006cfe00) Stream removed, broadcasting: 1\nI0406 22:10:30.796021 3922 log.go:172] (0xc0003db080) Go away received\nI0406 22:10:30.796553 3922 log.go:172] (0xc0003db080) (0xc0006cfe00) Stream removed, broadcasting: 1\nI0406 22:10:30.796588 3922 log.go:172] (0xc0003db080) (0xc000a3c000) Stream removed, broadcasting: 3\nI0406 22:10:30.796618 3922 log.go:172] (0xc0003db080) (0xc000a3c0a0) Stream removed, broadcasting: 5\n" Apr 6 22:10:30.801: INFO: stdout: "" Apr 6 22:10:30.801: INFO: Cleaning up the ExternalName to NodePort test service [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 6 22:10:30.856: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-1683" for this suite. [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143 • [SLOW TEST:12.154 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from ExternalName to NodePort [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]","total":278,"completed":267,"skipped":4420,"failed":0} SSS ------------------------------ [sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected combined /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 6 22:10:30.866: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name configmap-projected-all-test-volume-57d46612-5e91-42b6-b3ff-7c7f194e37f2 STEP: Creating secret with name secret-projected-all-test-volume-a1bcd12c-c899-4a3b-831e-23d48c77fda2 STEP: Creating a pod to test Check all projections for projected volume plugin Apr 6 22:10:31.004: INFO: Waiting up to 5m0s for pod "projected-volume-eff81536-5aa0-43c0-ac07-9bacd58c8250" in namespace "projected-3843" to be "success or failure" 
Apr 6 22:10:31.021: INFO: Pod "projected-volume-eff81536-5aa0-43c0-ac07-9bacd58c8250": Phase="Pending", Reason="", readiness=false. Elapsed: 16.071114ms Apr 6 22:10:33.024: INFO: Pod "projected-volume-eff81536-5aa0-43c0-ac07-9bacd58c8250": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019910207s Apr 6 22:10:35.038: INFO: Pod "projected-volume-eff81536-5aa0-43c0-ac07-9bacd58c8250": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.033012695s STEP: Saw pod success Apr 6 22:10:35.038: INFO: Pod "projected-volume-eff81536-5aa0-43c0-ac07-9bacd58c8250" satisfied condition "success or failure" Apr 6 22:10:35.040: INFO: Trying to get logs from node jerma-worker2 pod projected-volume-eff81536-5aa0-43c0-ac07-9bacd58c8250 container projected-all-volume-test: STEP: delete the pod Apr 6 22:10:35.058: INFO: Waiting for pod projected-volume-eff81536-5aa0-43c0-ac07-9bacd58c8250 to disappear Apr 6 22:10:35.088: INFO: Pod projected-volume-eff81536-5aa0-43c0-ac07-9bacd58c8250 no longer exists [AfterEach] [sig-storage] Projected combined /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 6 22:10:35.088: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3843" for this suite. •{"msg":"PASSED [sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance]","total":278,"completed":268,"skipped":4423,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicationController should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 6 22:10:35.096: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Given a Pod with a 'name' label pod-adoption is created STEP: When a replication controller with a matching selector is created STEP: Then the orphan pod is adopted [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 6 22:10:40.187: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-7024" for this suite. 
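Adoption in the test above means the ReplicationController takes ownership of a pre-existing pod whose labels match its selector, instead of creating a fresh replica. A sketch of the two objects involved, with assumed image; the pod name and label follow the log:

# A bare labeled pod, then an RC whose selector matches it; the controller
# adopts the orphan rather than spawning a new pod (image is an assumption):
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: pod-adoption
  labels:
    name: pod-adoption
spec:
  containers:
  - name: main
    image: nginx
---
apiVersion: v1
kind: ReplicationController
metadata:
  name: pod-adoption
spec:
  replicas: 1
  selector:
    name: pod-adoption
  template:
    metadata:
      labels:
        name: pod-adoption
    spec:
      containers:
      - name: main
        image: nginx
EOF
# The pod's ownerReferences now point at the controller:
kubectl get pod pod-adoption -o jsonpath='{.metadata.ownerReferences[0].kind}'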
• [SLOW TEST:5.098 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] ReplicationController should adopt matching pods on creation [Conformance]","total":278,"completed":269,"skipped":4445,"failed":0} SSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 6 22:10:40.194: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Apr 6 22:10:41.024: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Apr 6 22:10:43.033: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721807841, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721807841, loc:(*time.Location)(0x78ee080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721807841, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721807841, loc:(*time.Location)(0x78ee080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Apr 6 22:10:46.164: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Registering a validating webhook on ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects, via the AdmissionRegistration API STEP: Registering a mutating webhook on ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects, via the AdmissionRegistration API STEP: Creating a dummy validating-webhook-configuration object STEP: Deleting the validating-webhook-configuration, which should be possible to remove STEP: Creating a dummy mutating-webhook-configuration object STEP: Deleting the 
mutating-webhook-configuration, which should be possible to remove [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 6 22:10:46.307: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-4787" for this suite. STEP: Destroying namespace "webhook-4787-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:6.226 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should not be able to mutate or prevent deletion of webhook configuration objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","total":278,"completed":270,"skipped":4452,"failed":0} SSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 6 22:10:46.420: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:133 [It] should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Apr 6 22:10:46.748: INFO: Creating simple daemon set daemon-set STEP: Check that daemon pods launch on every node of the cluster. 
Apr 6 22:10:46.780: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 6 22:10:46.784: INFO: Number of nodes with available pods: 0 Apr 6 22:10:46.784: INFO: Node jerma-worker is running more than one daemon pod Apr 6 22:10:47.790: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 6 22:10:47.793: INFO: Number of nodes with available pods: 0 Apr 6 22:10:47.793: INFO: Node jerma-worker is running more than one daemon pod Apr 6 22:10:48.791: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 6 22:10:48.794: INFO: Number of nodes with available pods: 0 Apr 6 22:10:48.794: INFO: Node jerma-worker is running more than one daemon pod Apr 6 22:10:49.796: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 6 22:10:49.800: INFO: Number of nodes with available pods: 1 Apr 6 22:10:49.800: INFO: Node jerma-worker is running more than one daemon pod Apr 6 22:10:50.790: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 6 22:10:50.793: INFO: Number of nodes with available pods: 2 Apr 6 22:10:50.793: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Update daemon pods image. STEP: Check that daemon pods images are updated. Apr 6 22:10:50.837: INFO: Wrong image for pod: daemon-set-dd756. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Apr 6 22:10:50.837: INFO: Wrong image for pod: daemon-set-dfjts. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Apr 6 22:10:50.854: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 6 22:10:51.859: INFO: Wrong image for pod: daemon-set-dd756. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Apr 6 22:10:51.859: INFO: Wrong image for pod: daemon-set-dfjts. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Apr 6 22:10:51.863: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 6 22:10:52.859: INFO: Wrong image for pod: daemon-set-dd756. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Apr 6 22:10:52.859: INFO: Wrong image for pod: daemon-set-dfjts. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Apr 6 22:10:52.864: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 6 22:10:53.859: INFO: Wrong image for pod: daemon-set-dd756. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. 
Apr 6 22:10:53.859: INFO: Wrong image for pod: daemon-set-dfjts. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Apr 6 22:10:53.859: INFO: Pod daemon-set-dfjts is not available Apr 6 22:10:53.863: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 6 22:10:54.859: INFO: Pod daemon-set-6bxbl is not available Apr 6 22:10:54.859: INFO: Wrong image for pod: daemon-set-dd756. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Apr 6 22:10:54.864: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 6 22:10:55.858: INFO: Pod daemon-set-6bxbl is not available Apr 6 22:10:55.858: INFO: Wrong image for pod: daemon-set-dd756. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Apr 6 22:10:55.862: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 6 22:10:56.859: INFO: Pod daemon-set-6bxbl is not available Apr 6 22:10:56.859: INFO: Wrong image for pod: daemon-set-dd756. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Apr 6 22:10:56.863: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 6 22:10:57.859: INFO: Wrong image for pod: daemon-set-dd756. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Apr 6 22:10:57.863: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 6 22:10:58.859: INFO: Wrong image for pod: daemon-set-dd756. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Apr 6 22:10:58.859: INFO: Pod daemon-set-dd756 is not available Apr 6 22:10:58.863: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 6 22:10:59.859: INFO: Wrong image for pod: daemon-set-dd756. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Apr 6 22:10:59.859: INFO: Pod daemon-set-dd756 is not available Apr 6 22:10:59.862: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 6 22:11:00.859: INFO: Wrong image for pod: daemon-set-dd756. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Apr 6 22:11:00.859: INFO: Pod daemon-set-dd756 is not available Apr 6 22:11:00.863: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 6 22:11:01.859: INFO: Wrong image for pod: daemon-set-dd756. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. 
Apr 6 22:11:01.859: INFO: Pod daemon-set-dd756 is not available Apr 6 22:11:01.864: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 6 22:11:02.859: INFO: Wrong image for pod: daemon-set-dd756. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Apr 6 22:11:02.859: INFO: Pod daemon-set-dd756 is not available Apr 6 22:11:02.864: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 6 22:11:03.859: INFO: Wrong image for pod: daemon-set-dd756. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Apr 6 22:11:03.859: INFO: Pod daemon-set-dd756 is not available Apr 6 22:11:03.863: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 6 22:11:04.859: INFO: Wrong image for pod: daemon-set-dd756. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Apr 6 22:11:04.859: INFO: Pod daemon-set-dd756 is not available Apr 6 22:11:04.864: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 6 22:11:05.859: INFO: Wrong image for pod: daemon-set-dd756. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Apr 6 22:11:05.859: INFO: Pod daemon-set-dd756 is not available Apr 6 22:11:05.864: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 6 22:11:06.859: INFO: Wrong image for pod: daemon-set-dd756. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Apr 6 22:11:06.859: INFO: Pod daemon-set-dd756 is not available Apr 6 22:11:06.863: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 6 22:11:07.859: INFO: Wrong image for pod: daemon-set-dd756. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Apr 6 22:11:07.859: INFO: Pod daemon-set-dd756 is not available Apr 6 22:11:07.863: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 6 22:11:08.859: INFO: Wrong image for pod: daemon-set-dd756. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Apr 6 22:11:08.859: INFO: Pod daemon-set-dd756 is not available Apr 6 22:11:08.864: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 6 22:11:09.859: INFO: Pod daemon-set-chltf is not available Apr 6 22:11:09.864: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node STEP: Check that daemon pods are still running on every node of the cluster. 
Apr 6 22:11:09.868: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 6 22:11:09.871: INFO: Number of nodes with available pods: 1 Apr 6 22:11:09.871: INFO: Node jerma-worker2 is running more than one daemon pod Apr 6 22:11:10.876: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 6 22:11:10.879: INFO: Number of nodes with available pods: 1 Apr 6 22:11:10.879: INFO: Node jerma-worker2 is running more than one daemon pod Apr 6 22:11:11.875: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 6 22:11:11.878: INFO: Number of nodes with available pods: 1 Apr 6 22:11:11.878: INFO: Node jerma-worker2 is running more than one daemon pod Apr 6 22:11:12.876: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 6 22:11:12.897: INFO: Number of nodes with available pods: 2 Apr 6 22:11:12.897: INFO: Number of running nodes: 2, number of available pods: 2 [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:99 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-8807, will wait for the garbage collector to delete the pods Apr 6 22:11:12.970: INFO: Deleting DaemonSet.extensions daemon-set took: 5.667879ms Apr 6 22:11:13.270: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.243181ms Apr 6 22:11:19.574: INFO: Number of nodes with available pods: 0 Apr 6 22:11:19.574: INFO: Number of running nodes: 0, number of available pods: 0 Apr 6 22:11:19.576: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-8807/daemonsets","resourceVersion":"5993630"},"items":null} Apr 6 22:11:19.578: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-8807/pods","resourceVersion":"5993630"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 6 22:11:19.586: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-8807" for this suite. 
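The rollout above is driven by changing the DaemonSet's pod image under the RollingUpdate strategy; old pods are replaced node by node, which is why single pods go unavailable in turn before the final "2 running nodes, 2 available pods" state. The two images below come from the log itself; the manifest shape is an illustrative sketch, not the suite's exact spec:

# DaemonSet with RollingUpdate strategy, then an image change to force a rollout:
kubectl apply -f - <<'EOF'
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: daemon-set
spec:
  selector:
    matchLabels:
      app: daemon-set
  updateStrategy:
    type: RollingUpdate
  template:
    metadata:
      labels:
        app: daemon-set
    spec:
      containers:
      - name: app
        image: docker.io/library/httpd:2.4.38-alpine
EOF
kubectl set image daemonset/daemon-set app=gcr.io/kubernetes-e2e-test-images/agnhost:2.8
kubectl rollout status daemonset/daemon-set   # waits until every node runs the new image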
• [SLOW TEST:33.173 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]","total":278,"completed":271,"skipped":4463,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 6 22:11:19.594: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via environment variable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap configmap-8379/configmap-test-efdc4abb-4865-4c40-8c6b-81f3e78338ca STEP: Creating a pod to test consume configMaps Apr 6 22:11:19.691: INFO: Waiting up to 5m0s for pod "pod-configmaps-721197a6-2cf5-41ab-87f1-f32d87affd09" in namespace "configmap-8379" to be "success or failure" Apr 6 22:11:19.695: INFO: Pod "pod-configmaps-721197a6-2cf5-41ab-87f1-f32d87affd09": Phase="Pending", Reason="", readiness=false. Elapsed: 3.785585ms Apr 6 22:11:21.699: INFO: Pod "pod-configmaps-721197a6-2cf5-41ab-87f1-f32d87affd09": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007693575s Apr 6 22:11:23.717: INFO: Pod "pod-configmaps-721197a6-2cf5-41ab-87f1-f32d87affd09": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.02590355s STEP: Saw pod success Apr 6 22:11:23.717: INFO: Pod "pod-configmaps-721197a6-2cf5-41ab-87f1-f32d87affd09" satisfied condition "success or failure" Apr 6 22:11:23.720: INFO: Trying to get logs from node jerma-worker pod pod-configmaps-721197a6-2cf5-41ab-87f1-f32d87affd09 container env-test: STEP: delete the pod Apr 6 22:11:23.755: INFO: Waiting for pod pod-configmaps-721197a6-2cf5-41ab-87f1-f32d87affd09 to disappear Apr 6 22:11:23.759: INFO: Pod pod-configmaps-721197a6-2cf5-41ab-87f1-f32d87affd09 no longer exists [AfterEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 6 22:11:23.759: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-8379" for this suite. 
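Consuming a ConfigMap through an environment variable, as in the test above, needs only a configMapKeyRef in the container's env. A minimal sketch with hypothetical names and key:

# ConfigMap consumed as an environment variable (names and key are assumptions):
kubectl create configmap config-test --from-literal=data-1=value-1
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: pod-configmaps-env
spec:
  restartPolicy: Never
  containers:
  - name: env-test
    image: busybox
    command: ["sh", "-c", "echo $CONFIG_DATA_1"]
    env:
    - name: CONFIG_DATA_1
      valueFrom:
        configMapKeyRef:
          name: config-test
          key: data-1
EOF
kubectl logs pod-configmaps-env   # prints value-1 once the container has run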
•{"msg":"PASSED [sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance]","total":278,"completed":272,"skipped":4492,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 6 22:11:23.766: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating pod liveness-301c97e7-7ae7-495b-b3aa-42c49efe960d in namespace container-probe-2307 Apr 6 22:11:27.852: INFO: Started pod liveness-301c97e7-7ae7-495b-b3aa-42c49efe960d in namespace container-probe-2307 STEP: checking the pod's current state and verifying that restartCount is present Apr 6 22:11:27.854: INFO: Initial restart count of pod liveness-301c97e7-7ae7-495b-b3aa-42c49efe960d is 0 Apr 6 22:11:51.910: INFO: Restart count of pod container-probe-2307/liveness-301c97e7-7ae7-495b-b3aa-42c49efe960d is now 1 (24.0554716s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 6 22:11:51.930: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-2307" for this suite. 
• [SLOW TEST:28.182 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":278,"completed":273,"skipped":4534,"failed":0} SSSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for pods for Subdomain [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 6 22:11:51.949: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for pods for Subdomain [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-4767.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-querier-2.dns-test-service-2.dns-4767.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-4767.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-querier-2.dns-test-service-2.dns-4767.svc.cluster.local;check="$$(dig +notcp +noall +answer +search dns-test-service-2.dns-4767.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service-2.dns-4767.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service-2.dns-4767.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service-2.dns-4767.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-4767.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-4767.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-querier-2.dns-test-service-2.dns-4767.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-4767.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-querier-2.dns-test-service-2.dns-4767.svc.cluster.local;check="$$(dig +notcp +noall +answer +search dns-test-service-2.dns-4767.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service-2.dns-4767.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service-2.dns-4767.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service-2.dns-4767.svc.cluster.local;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-4767.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Apr 6 22:11:56.119: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-4767.svc.cluster.local from pod dns-4767/dns-test-fb2f062b-5629-47d4-9a51-c7f2d06318f5: the server could not find the requested resource (get pods dns-test-fb2f062b-5629-47d4-9a51-c7f2d06318f5) Apr 6 22:11:56.122: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-4767.svc.cluster.local from pod dns-4767/dns-test-fb2f062b-5629-47d4-9a51-c7f2d06318f5: the server could not find the requested resource (get pods dns-test-fb2f062b-5629-47d4-9a51-c7f2d06318f5) Apr 6 22:11:56.126: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-4767.svc.cluster.local from pod dns-4767/dns-test-fb2f062b-5629-47d4-9a51-c7f2d06318f5: the server could not find the requested resource (get pods dns-test-fb2f062b-5629-47d4-9a51-c7f2d06318f5) Apr 6 22:11:56.128: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-4767.svc.cluster.local from pod dns-4767/dns-test-fb2f062b-5629-47d4-9a51-c7f2d06318f5: the server could not find the requested resource (get pods dns-test-fb2f062b-5629-47d4-9a51-c7f2d06318f5) Apr 6 22:11:56.136: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-4767.svc.cluster.local from pod dns-4767/dns-test-fb2f062b-5629-47d4-9a51-c7f2d06318f5: the server could not find the requested resource (get pods dns-test-fb2f062b-5629-47d4-9a51-c7f2d06318f5) Apr 6 22:11:56.139: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-4767.svc.cluster.local from pod dns-4767/dns-test-fb2f062b-5629-47d4-9a51-c7f2d06318f5: the server could not find the requested resource (get pods dns-test-fb2f062b-5629-47d4-9a51-c7f2d06318f5) Apr 6 22:11:56.142: INFO: Unable to read jessie_udp@dns-test-service-2.dns-4767.svc.cluster.local from pod dns-4767/dns-test-fb2f062b-5629-47d4-9a51-c7f2d06318f5: the server could not find the requested resource (get pods dns-test-fb2f062b-5629-47d4-9a51-c7f2d06318f5) Apr 6 22:11:56.145: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-4767.svc.cluster.local from pod dns-4767/dns-test-fb2f062b-5629-47d4-9a51-c7f2d06318f5: the server could not find the requested resource (get pods dns-test-fb2f062b-5629-47d4-9a51-c7f2d06318f5) Apr 6 22:11:56.151: INFO: Lookups using dns-4767/dns-test-fb2f062b-5629-47d4-9a51-c7f2d06318f5 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-4767.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-4767.svc.cluster.local wheezy_udp@dns-test-service-2.dns-4767.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-4767.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-4767.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-4767.svc.cluster.local jessie_udp@dns-test-service-2.dns-4767.svc.cluster.local jessie_tcp@dns-test-service-2.dns-4767.svc.cluster.local] Apr 6 22:12:01.156: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-4767.svc.cluster.local from pod dns-4767/dns-test-fb2f062b-5629-47d4-9a51-c7f2d06318f5: the server could not find the requested resource (get pods 
dns-test-fb2f062b-5629-47d4-9a51-c7f2d06318f5)
Apr 6 22:12:01.168: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-4767.svc.cluster.local from pod dns-4767/dns-test-fb2f062b-5629-47d4-9a51-c7f2d06318f5: the server could not find the requested resource (get pods dns-test-fb2f062b-5629-47d4-9a51-c7f2d06318f5)
Apr 6 22:12:01.171: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-4767.svc.cluster.local from pod dns-4767/dns-test-fb2f062b-5629-47d4-9a51-c7f2d06318f5: the server could not find the requested resource (get pods dns-test-fb2f062b-5629-47d4-9a51-c7f2d06318f5)
Apr 6 22:12:01.174: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-4767.svc.cluster.local from pod dns-4767/dns-test-fb2f062b-5629-47d4-9a51-c7f2d06318f5: the server could not find the requested resource (get pods dns-test-fb2f062b-5629-47d4-9a51-c7f2d06318f5)
Apr 6 22:12:01.181: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-4767.svc.cluster.local from pod dns-4767/dns-test-fb2f062b-5629-47d4-9a51-c7f2d06318f5: the server could not find the requested resource (get pods dns-test-fb2f062b-5629-47d4-9a51-c7f2d06318f5)
Apr 6 22:12:01.184: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-4767.svc.cluster.local from pod dns-4767/dns-test-fb2f062b-5629-47d4-9a51-c7f2d06318f5: the server could not find the requested resource (get pods dns-test-fb2f062b-5629-47d4-9a51-c7f2d06318f5)
Apr 6 22:12:01.187: INFO: Unable to read jessie_udp@dns-test-service-2.dns-4767.svc.cluster.local from pod dns-4767/dns-test-fb2f062b-5629-47d4-9a51-c7f2d06318f5: the server could not find the requested resource (get pods dns-test-fb2f062b-5629-47d4-9a51-c7f2d06318f5)
Apr 6 22:12:01.190: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-4767.svc.cluster.local from pod dns-4767/dns-test-fb2f062b-5629-47d4-9a51-c7f2d06318f5: the server could not find the requested resource (get pods dns-test-fb2f062b-5629-47d4-9a51-c7f2d06318f5)
Apr 6 22:12:01.195: INFO: Lookups using dns-4767/dns-test-fb2f062b-5629-47d4-9a51-c7f2d06318f5 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-4767.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-4767.svc.cluster.local wheezy_udp@dns-test-service-2.dns-4767.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-4767.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-4767.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-4767.svc.cluster.local jessie_udp@dns-test-service-2.dns-4767.svc.cluster.local jessie_tcp@dns-test-service-2.dns-4767.svc.cluster.local]
Apr 6 22:12:06.190: INFO: Lookups using dns-4767/dns-test-fb2f062b-5629-47d4-9a51-c7f2d06318f5 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-4767.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-4767.svc.cluster.local wheezy_udp@dns-test-service-2.dns-4767.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-4767.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-4767.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-4767.svc.cluster.local jessie_udp@dns-test-service-2.dns-4767.svc.cluster.local jessie_tcp@dns-test-service-2.dns-4767.svc.cluster.local]
Apr 6 22:12:11.195: INFO: Lookups using dns-4767/dns-test-fb2f062b-5629-47d4-9a51-c7f2d06318f5 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-4767.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-4767.svc.cluster.local wheezy_udp@dns-test-service-2.dns-4767.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-4767.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-4767.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-4767.svc.cluster.local jessie_udp@dns-test-service-2.dns-4767.svc.cluster.local jessie_tcp@dns-test-service-2.dns-4767.svc.cluster.local]
Apr 6 22:12:16.192: INFO: Lookups using dns-4767/dns-test-fb2f062b-5629-47d4-9a51-c7f2d06318f5 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-4767.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-4767.svc.cluster.local wheezy_udp@dns-test-service-2.dns-4767.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-4767.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-4767.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-4767.svc.cluster.local jessie_udp@dns-test-service-2.dns-4767.svc.cluster.local jessie_tcp@dns-test-service-2.dns-4767.svc.cluster.local]
Apr 6 22:12:21.225: INFO: Lookups using dns-4767/dns-test-fb2f062b-5629-47d4-9a51-c7f2d06318f5 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-4767.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-4767.svc.cluster.local wheezy_udp@dns-test-service-2.dns-4767.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-4767.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-4767.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-4767.svc.cluster.local jessie_udp@dns-test-service-2.dns-4767.svc.cluster.local jessie_tcp@dns-test-service-2.dns-4767.svc.cluster.local]
Apr 6 22:12:26.190: INFO: DNS probes using dns-4767/dns-test-fb2f062b-5629-47d4-9a51-c7f2d06318f5 succeeded
STEP: deleting the pod
STEP: deleting the test headless service
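Editor's note: each probe round above issues the same eight lookups (wheezy and jessie resolver images, over UDP and TCP, for the pod's subdomain name and the headless service name) and retries on a roughly 5-second cadence until every record resolves. Below is a minimal stand-alone sketch of that poll-until-resolved pattern using only the Go standard library; the hostnames are taken from the log, but the real test runs its queries inside dedicated probe pods and reads the results back through the API server rather than calling the resolver directly.

```go
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// Names taken from the log above: the subdomain records the probe
	// pods must be able to resolve for the headless service.
	names := []string{
		"dns-querier-2.dns-test-service-2.dns-4767.svc.cluster.local",
		"dns-test-service-2.dns-4767.svc.cluster.local",
	}
	for attempt := 0; attempt < 60; attempt++ {
		var failed []string
		for _, n := range names {
			if _, err := net.LookupHost(n); err != nil {
				failed = append(failed, n)
			}
		}
		if len(failed) == 0 {
			fmt.Println("DNS probes succeeded")
			return
		}
		fmt.Printf("Lookups failed for: %v\n", failed)
		time.Sleep(5 * time.Second) // the framework polls on a similar ~5s cadence
	}
	fmt.Println("DNS probes did not converge in time")
}
```

The failures above are expected while kube-dns/CoreDNS propagates the new headless-service records; the test passes once a full round resolves, as it does at 22:12:26.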
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Apr 6 22:12:26.290: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-4767" for this suite.
• [SLOW TEST:34.730 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide DNS for pods for Subdomain [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] DNS should provide DNS for pods for Subdomain [Conformance]","total":278,"completed":274,"skipped":4543,"failed":0}
SS
------------------------------
[sig-cli] Kubectl client Kubectl version
  should check is all data is printed [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Apr 6 22:12:26.679: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272
[It] should check is all data is printed [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Apr 6 22:12:26.750: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config version'
Apr 6 22:12:26.878: INFO: stderr: ""
Apr 6 22:12:26.878: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"17\", GitVersion:\"v1.17.4\", GitCommit:\"8d8aa39598534325ad77120c120a22b3a990b5ea\", GitTreeState:\"clean\", BuildDate:\"2020-04-05T10:48:13Z\", GoVersion:\"go1.13.9\", Compiler:\"gc\", Platform:\"linux/amd64\"}\nServer Version: version.Info{Major:\"1\", Minor:\"17\", GitVersion:\"v1.17.2\", GitCommit:\"59603c6e503c87169aea6106f57b9f242f64df89\", GitTreeState:\"clean\", BuildDate:\"2020-02-07T01:05:17Z\", GoVersion:\"go1.13.5\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Apr 6 22:12:26.878: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-8445" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl version should check is all data is printed [Conformance]","total":278,"completed":275,"skipped":4545,"failed":0}
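Editor's note: the Kubectl version spec above does little more than shell out to kubectl version and assert that both the client and server version.Info blocks appear in stdout. A rough stand-alone equivalent of that check follows; the binary path and kubeconfig flag mirror the log, while the substring assertions are an illustrative simplification of the framework's checks, not its actual helper code.

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Run `kubectl version` the same way the test does and verify that
	// both halves of the version report are printed.
	out, err := exec.Command("/usr/local/bin/kubectl",
		"--kubeconfig=/root/.kube/config", "version").CombinedOutput()
	if err != nil {
		panic(err)
	}
	s := string(out)
	for _, want := range []string{"Client Version", "Server Version", "GitVersion"} {
		if !strings.Contains(s, want) {
			panic(fmt.Sprintf("missing %q in kubectl version output", want))
		}
	}
	fmt.Println("all version data printed")
}
```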
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl version should check is all data is printed [Conformance]","total":278,"completed":275,"skipped":4545,"failed":0} ------------------------------ [sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 6 22:12:26.887: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Apr 6 22:12:26.966: INFO: Waiting up to 5m0s for pod "downwardapi-volume-1fa73969-67d8-4e06-bc0b-0800a2eade52" in namespace "projected-8002" to be "success or failure" Apr 6 22:12:26.972: INFO: Pod "downwardapi-volume-1fa73969-67d8-4e06-bc0b-0800a2eade52": Phase="Pending", Reason="", readiness=false. Elapsed: 5.333524ms Apr 6 22:12:28.976: INFO: Pod "downwardapi-volume-1fa73969-67d8-4e06-bc0b-0800a2eade52": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009293772s Apr 6 22:12:30.980: INFO: Pod "downwardapi-volume-1fa73969-67d8-4e06-bc0b-0800a2eade52": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.013445192s STEP: Saw pod success Apr 6 22:12:30.980: INFO: Pod "downwardapi-volume-1fa73969-67d8-4e06-bc0b-0800a2eade52" satisfied condition "success or failure" Apr 6 22:12:30.984: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-1fa73969-67d8-4e06-bc0b-0800a2eade52 container client-container: STEP: delete the pod Apr 6 22:12:31.038: INFO: Waiting for pod downwardapi-volume-1fa73969-67d8-4e06-bc0b-0800a2eade52 to disappear Apr 6 22:12:31.055: INFO: Pod downwardapi-volume-1fa73969-67d8-4e06-bc0b-0800a2eade52 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 6 22:12:31.055: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8002" for this suite. 
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":278,"completed":276,"skipped":4545,"failed":0} SSSSSSSSSSSS ------------------------------ [k8s.io] Probing container should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 6 22:12:31.090: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating pod busybox-430f9a16-b478-49bc-b09e-50566b3089c9 in namespace container-probe-9082 Apr 6 22:12:35.162: INFO: Started pod busybox-430f9a16-b478-49bc-b09e-50566b3089c9 in namespace container-probe-9082 STEP: checking the pod's current state and verifying that restartCount is present Apr 6 22:12:35.165: INFO: Initial restart count of pod busybox-430f9a16-b478-49bc-b09e-50566b3089c9 is 0 STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 6 22:16:35.958: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-9082" for this suite. 
------------------------------
[sig-api-machinery] Watchers
  should receive events on concurrent watches in same order [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Apr 6 22:16:36.008: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should receive events on concurrent watches in same order [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: starting a background goroutine to produce watch events
STEP: creating watches starting from each resource version of the events produced and verifying they all receive resource versions in the same order
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Apr 6 22:16:40.566: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-1802" for this suite.
•{"msg":"PASSED [sig-api-machinery] Watchers should receive events on concurrent watches in same order [Conformance]","total":278,"completed":278,"skipped":4558,"failed":0}
SSSSSS
Apr 6 22:16:40.665: INFO: Running AfterSuite actions on all nodes
Apr 6 22:16:40.665: INFO: Running AfterSuite actions on node 1
Apr 6 22:16:40.665: INFO: Skipping dumping logs from cluster
{"msg":"Test Suite completed","total":278,"completed":278,"skipped":4564,"failed":0}

Ran 278 of 4842 Specs in 4218.525 seconds
SUCCESS! -- 278 Passed | 0 Failed | 0 Pending | 4564 Skipped
PASS
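Editor's note: the final Watchers spec above asserts that concurrent watches, each started from a different resource version of the same event stream, all observe resource versions in the same order. Below is a self-contained toy model of that ordering property, using goroutines and channels in place of API-server watches; the watcher count and version numbers are invented for illustration.

```go
package main

import (
	"fmt"
	"reflect"
	"sync"
)

func main() {
	versions := []int{101, 102, 103, 104, 105} // stand-in resourceVersions
	const watchers = 3

	// One buffered channel per watcher, modelling independent watch streams.
	chans := make([]chan int, watchers)
	for i := range chans {
		chans[i] = make(chan int, len(versions))
	}
	// Producer: broadcast every event to all watchers in a fixed order.
	for _, v := range versions {
		for _, ch := range chans {
			ch <- v
		}
	}
	for _, ch := range chans {
		close(ch)
	}

	// Each watcher records the order in which it received events.
	var wg sync.WaitGroup
	seen := make([][]int, watchers)
	for i, ch := range chans {
		wg.Add(1)
		go func(i int, ch chan int) {
			defer wg.Done()
			for v := range ch {
				seen[i] = append(seen[i], v)
			}
		}(i, ch)
	}
	wg.Wait()

	// The property under test: every watcher saw the same ordering.
	for i := 1; i < watchers; i++ {
		if !reflect.DeepEqual(seen[0], seen[i]) {
			panic("watchers observed different orders")
		}
	}
	fmt.Println("all watchers received events in the same order")
}
```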