I0510 21:08:51.983769 6 test_context.go:419] Tolerating taints "node-role.kubernetes.io/master" when considering if nodes are ready
I0510 21:08:51.984128 6 e2e.go:109] Starting e2e run "2ee887b1-94aa-4a9e-bb1e-b5a00d2c8458" on Ginkgo node 1
{"msg":"Test Suite starting","total":278,"completed":0,"skipped":0,"failed":0}
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1589144930 - Will randomize all specs
Will run 278 of 4842 specs

May 10 21:08:52.047: INFO: >>> kubeConfig: /root/.kube/config
May 10 21:08:52.052: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
May 10 21:08:52.075: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
May 10 21:08:52.105: INFO: 12 / 12 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
May 10 21:08:52.105: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready.
May 10 21:08:52.105: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
May 10 21:08:52.116: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kindnet' (0 seconds elapsed)
May 10 21:08:52.116: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
May 10 21:08:52.116: INFO: e2e test version: v1.17.4
May 10 21:08:52.117: INFO: kube-apiserver version: v1.17.2
May 10 21:08:52.117: INFO: >>> kubeConfig: /root/.kube/config
May 10 21:08:52.122: INFO: Cluster IP family: ipv4
SSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
May 10 21:08:52.122: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
May 10 21:08:52.188: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test emptydir 0644 on node default medium
May 10 21:08:52.194: INFO: Waiting up to 5m0s for pod "pod-e6673be6-2050-459c-b445-e9e08efc5ddf" in namespace "emptydir-3459" to be "success or failure"
May 10 21:08:52.197: INFO: Pod "pod-e6673be6-2050-459c-b445-e9e08efc5ddf": Phase="Pending", Reason="", readiness=false. Elapsed: 2.64493ms
May 10 21:08:54.212: INFO: Pod "pod-e6673be6-2050-459c-b445-e9e08efc5ddf": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017722152s
May 10 21:08:56.215: INFO: Pod "pod-e6673be6-2050-459c-b445-e9e08efc5ddf": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.020870852s
STEP: Saw pod success
May 10 21:08:56.215: INFO: Pod "pod-e6673be6-2050-459c-b445-e9e08efc5ddf" satisfied condition "success or failure"
May 10 21:08:56.218: INFO: Trying to get logs from node jerma-worker2 pod pod-e6673be6-2050-459c-b445-e9e08efc5ddf container test-container:
STEP: delete the pod
May 10 21:08:56.247: INFO: Waiting for pod pod-e6673be6-2050-459c-b445-e9e08efc5ddf to disappear
May 10 21:08:56.251: INFO: Pod pod-e6673be6-2050-459c-b445-e9e08efc5ddf no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
May 10 21:08:56.251: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-3459" for this suite.
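For readers reconstructing the spec above, the pod it creates looks roughly like the manifest below. This is a hedged sketch, not suite output: the suite generates the pod name and drives its own mount-test image with generated arguments, so the name, image, and command here are illustrative assumptions.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-0644-sketch        # hypothetical; the suite generates pod-<uuid>
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1001                 # the "non-root" part of (non-root,0644,default)
  containers:
  - name: test-container
    image: busybox:1.29             # stand-in; the suite uses its own mount-test image
    # illustrative check of the 0644 file mode the test asserts on
    command: ["sh", "-c", "stat -c '%a' /test-volume/test-file"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir: {}                    # "default" medium: backed by node storage
```

As the log entries show, the framework then waits for the pod to reach "success or failure" (Phase=Succeeded) and reads the container log before deleting the pod.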
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":1,"skipped":14,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
May 10 21:08:56.260: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:86
May 10 21:08:56.534: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
May 10 21:08:56.638: INFO: Waiting for terminating namespaces to be deleted...
May 10 21:08:56.641: INFO: Logging pods the kubelet thinks is on node jerma-worker before test
May 10 21:08:56.671: INFO: kindnet-c5svj from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container statuses recorded)
May 10 21:08:56.672: INFO: Container kindnet-cni ready: true, restart count 0
May 10 21:08:56.672: INFO: kube-proxy-44mlz from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container statuses recorded)
May 10 21:08:56.672: INFO: Container kube-proxy ready: true, restart count 0
May 10 21:08:56.672: INFO: Logging pods the kubelet thinks is on node jerma-worker2 before test
May 10 21:08:56.676: INFO: kindnet-zk6sq from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container statuses recorded)
May 10 21:08:56.676: INFO: Container kindnet-cni ready: true, restart count 0
May 10 21:08:56.676: INFO: kube-bench-hk6h6 from default started at 2020-03-26 15:21:52 +0000 UTC (1 container statuses recorded)
May 10 21:08:56.676: INFO: Container kube-bench ready: false, restart count 0
May 10 21:08:56.677: INFO: kube-proxy-75q42 from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container statuses recorded)
May 10 21:08:56.677: INFO: Container kube-proxy ready: true, restart count 0
May 10 21:08:56.677: INFO: kube-hunter-8g6pb from default started at 2020-03-26 15:21:33 +0000 UTC (1 container statuses recorded)
May 10 21:08:56.677: INFO: Container kube-hunter ready: false, restart count 0
[It] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Trying to launch a pod without a label to get a node which can launch it.
STEP: Explicitly delete pod here to free the resource it takes.
STEP: Trying to apply a random label on the found node.
STEP: verifying the node has the label kubernetes.io/e2e-8d727357-443f-4dc1-bcf3-76800689422c 90
STEP: Trying to create a pod(pod1) with hostport 54321 and hostIP 127.0.0.1 and expect scheduled
STEP: Trying to create another pod(pod2) with hostport 54321 but hostIP 127.0.0.2 on the node which pod1 resides and expect scheduled
STEP: Trying to create a third pod(pod3) with hostport 54321, hostIP 127.0.0.2 but use UDP protocol on the node which pod2 resides
STEP: removing the label kubernetes.io/e2e-8d727357-443f-4dc1-bcf3-76800689422c off the node jerma-worker2
STEP: verifying the node doesn't have the label kubernetes.io/e2e-8d727357-443f-4dc1-bcf3-76800689422c
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
May 10 21:09:12.956: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-895" for this suite.
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:77
• [SLOW TEST:16.703 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40
  validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance]","total":278,"completed":2,"skipped":59,"failed":0}
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl run --rm job should create a job from an image, then delete the job [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
May 10 21:09:12.964: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272
[It] should create a job from an image, then delete the job [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: executing a command with run --rm and attach with stdin
May 10 21:09:13.067: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-873 run e2e-test-rm-busybox-job --image=docker.io/library/busybox:1.29 --rm=true --generator=job/v1 --restart=OnFailure --attach=true --stdin -- sh -c cat && echo 'stdin closed''
May 10 21:09:19.664: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. 
Use kubectl run --generator=run-pod/v1 or kubectl create instead.\nIf you don't see a command prompt, try pressing enter.\nI0510 21:09:19.515022 28 log.go:172] (0xc000b220b0) (0xc000b5c140) Create stream\nI0510 21:09:19.515110 28 log.go:172] (0xc000b220b0) (0xc000b5c140) Stream added, broadcasting: 1\nI0510 21:09:19.521735 28 log.go:172] (0xc000b220b0) Reply frame received for 1\nI0510 21:09:19.521779 28 log.go:172] (0xc000b220b0) (0xc000b5c000) Create stream\nI0510 21:09:19.521789 28 log.go:172] (0xc000b220b0) (0xc000b5c000) Stream added, broadcasting: 3\nI0510 21:09:19.522822 28 log.go:172] (0xc000b220b0) Reply frame received for 3\nI0510 21:09:19.522906 28 log.go:172] (0xc000b220b0) (0xc00075b360) Create stream\nI0510 21:09:19.522924 28 log.go:172] (0xc000b220b0) (0xc00075b360) Stream added, broadcasting: 5\nI0510 21:09:19.523885 28 log.go:172] (0xc000b220b0) Reply frame received for 5\nI0510 21:09:19.523922 28 log.go:172] (0xc000b220b0) (0xc00075b400) Create stream\nI0510 21:09:19.523935 28 log.go:172] (0xc000b220b0) (0xc00075b400) Stream added, broadcasting: 7\nI0510 21:09:19.524914 28 log.go:172] (0xc000b220b0) Reply frame received for 7\nI0510 21:09:19.525057 28 log.go:172] (0xc000b5c000) (3) Writing data frame\nI0510 21:09:19.525388 28 log.go:172] (0xc000b5c000) (3) Writing data frame\nI0510 21:09:19.526328 28 log.go:172] (0xc000b220b0) Data frame received for 5\nI0510 21:09:19.526343 28 log.go:172] (0xc00075b360) (5) Data frame handling\nI0510 21:09:19.526359 28 log.go:172] (0xc00075b360) (5) Data frame sent\nI0510 21:09:19.527940 28 log.go:172] (0xc000b220b0) Data frame received for 5\nI0510 21:09:19.527960 28 log.go:172] (0xc00075b360) (5) Data frame handling\nI0510 21:09:19.527976 28 log.go:172] (0xc00075b360) (5) Data frame sent\nI0510 21:09:19.567593 28 log.go:172] (0xc000b220b0) Data frame received for 7\nI0510 21:09:19.567641 28 log.go:172] (0xc000b220b0) Data frame received for 5\nI0510 21:09:19.567673 28 log.go:172] (0xc00075b360) (5) Data frame 
handling\nI0510 21:09:19.567706 28 log.go:172] (0xc00075b400) (7) Data frame handling\nI0510 21:09:19.567752 28 log.go:172] (0xc000b220b0) Data frame received for 1\nI0510 21:09:19.567841 28 log.go:172] (0xc000b5c140) (1) Data frame handling\nI0510 21:09:19.567960 28 log.go:172] (0xc000b5c140) (1) Data frame sent\nI0510 21:09:19.568007 28 log.go:172] (0xc000b220b0) (0xc000b5c000) Stream removed, broadcasting: 3\nI0510 21:09:19.568044 28 log.go:172] (0xc000b220b0) (0xc000b5c140) Stream removed, broadcasting: 1\nI0510 21:09:19.568084 28 log.go:172] (0xc000b220b0) Go away received\nI0510 21:09:19.568433 28 log.go:172] (0xc000b220b0) (0xc000b5c140) Stream removed, broadcasting: 1\nI0510 21:09:19.568452 28 log.go:172] (0xc000b220b0) (0xc000b5c000) Stream removed, broadcasting: 3\nI0510 21:09:19.568460 28 log.go:172] (0xc000b220b0) (0xc00075b360) Stream removed, broadcasting: 5\nI0510 21:09:19.568468 28 log.go:172] (0xc000b220b0) (0xc00075b400) Stream removed, broadcasting: 7\n"
May 10 21:09:19.664: INFO: stdout: "abcd1234stdin closed\njob.batch \"e2e-test-rm-busybox-job\" deleted\n"
STEP: verifying the job e2e-test-rm-busybox-job was deleted
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
May 10 21:09:21.671: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-873" for this suite.
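The deprecation warning in the stderr above is why `--generator=job/v1` was later removed from `kubectl run`; the modern equivalent of that one-shot command is an explicit Job. The sketch below takes its name, image, and command from the logged command line, but the manifest itself is an assumption, not suite output.

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: e2e-test-rm-busybox-job
spec:
  template:
    spec:
      restartPolicy: OnFailure              # matches --restart=OnFailure
      containers:
      - name: e2e-test-rm-busybox-job
        image: docker.io/library/busybox:1.29
        stdin: true                         # matches --stdin; cat exits when stdin closes
        command: ["sh", "-c", "cat && echo 'stdin closed'"]
```

Attaching with `kubectl attach` and then deleting the Job reproduces the `--rm` behavior, which is exactly what the test verifies with the `job.batch "e2e-test-rm-busybox-job" deleted` stdout above.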
• [SLOW TEST:8.715 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl run --rm job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1837
    should create a job from an image, then delete the job [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl run --rm job should create a job from an image, then delete the job [Conformance]","total":278,"completed":3,"skipped":79,"failed":0}
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
May 10 21:09:21.680: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177
[It] should contain environment variables for services [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
May 10 21:09:25.823: INFO: Waiting up to 5m0s for pod "client-envvars-207bf0b8-a7d5-46f5-84a6-9599c218e4b2" in namespace "pods-4327" to be "success or failure"
May 10 21:09:25.827: INFO: Pod "client-envvars-207bf0b8-a7d5-46f5-84a6-9599c218e4b2": Phase="Pending", Reason="", readiness=false. Elapsed: 4.338806ms
May 10 21:09:27.832: INFO: Pod "client-envvars-207bf0b8-a7d5-46f5-84a6-9599c218e4b2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008873223s
May 10 21:09:29.836: INFO: Pod "client-envvars-207bf0b8-a7d5-46f5-84a6-9599c218e4b2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.013368522s
STEP: Saw pod success
May 10 21:09:29.836: INFO: Pod "client-envvars-207bf0b8-a7d5-46f5-84a6-9599c218e4b2" satisfied condition "success or failure"
May 10 21:09:29.839: INFO: Trying to get logs from node jerma-worker pod client-envvars-207bf0b8-a7d5-46f5-84a6-9599c218e4b2 container env3cont:
STEP: delete the pod
May 10 21:09:29.975: INFO: Waiting for pod client-envvars-207bf0b8-a7d5-46f5-84a6-9599c218e4b2 to disappear
May 10 21:09:29.979: INFO: Pod client-envvars-207bf0b8-a7d5-46f5-84a6-9599c218e4b2 no longer exists
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
May 10 21:09:29.979: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-4327" for this suite.
• [SLOW TEST:8.322 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should contain environment variables for services [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance]","total":278,"completed":4,"skipped":101,"failed":0}
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
May 10 21:09:30.002: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
May 10 21:09:30.653: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
May 10 21:09:32.663: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724741770, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724741770, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724741770, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724741770, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
May 10 21:09:35.695: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should mutate custom resource with different stored version [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
May 10 21:09:35.699: INFO: >>> kubeConfig: /root/.kube/config
STEP: Registering the mutating webhook for custom resource e2e-test-webhook-3563-crds.webhook.example.com via the AdmissionRegistration API
STEP: Creating a custom resource while v1 is storage version
STEP: Patching Custom Resource Definition to set v2 as storage
STEP: Patching the custom resource while v2 is storage version
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
May 10 21:09:36.980: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-9990" for this suite.
STEP: Destroying namespace "webhook-9990-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
• [SLOW TEST:7.079 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should mutate custom resource with different stored version [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","total":278,"completed":5,"skipped":101,"failed":0}
[sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
May 10 21:09:37.082: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test emptydir 0644 on tmpfs
May 10 21:09:37.183: INFO: Waiting up to 5m0s for pod "pod-7f3cbf34-ff75-4580-abcd-71526cf1541e" in namespace "emptydir-4776" to be "success or failure"
May 10 21:09:37.198: INFO: Pod "pod-7f3cbf34-ff75-4580-abcd-71526cf1541e": Phase="Pending", Reason="", readiness=false. Elapsed: 14.468469ms
May 10 21:09:39.201: INFO: Pod "pod-7f3cbf34-ff75-4580-abcd-71526cf1541e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017134632s
May 10 21:09:41.204: INFO: Pod "pod-7f3cbf34-ff75-4580-abcd-71526cf1541e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.021037317s
STEP: Saw pod success
May 10 21:09:41.205: INFO: Pod "pod-7f3cbf34-ff75-4580-abcd-71526cf1541e" satisfied condition "success or failure"
May 10 21:09:41.208: INFO: Trying to get logs from node jerma-worker2 pod pod-7f3cbf34-ff75-4580-abcd-71526cf1541e container test-container:
STEP: delete the pod
May 10 21:09:41.230: INFO: Waiting for pod pod-7f3cbf34-ff75-4580-abcd-71526cf1541e to disappear
May 10 21:09:41.234: INFO: Pod pod-7f3cbf34-ff75-4580-abcd-71526cf1541e no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
May 10 21:09:41.234: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-4776" for this suite.
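The only structural difference between the tmpfs spec above and the default-medium EmptyDir variants is the volume's medium. A minimal sketch of the volume stanza this spec exercises (an illustration, not suite output):

```yaml
volumes:
- name: test-volume
  emptyDir:
    medium: Memory    # tmpfs: RAM-backed; usage counts against the container's memory limit
```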
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":6,"skipped":101,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
May 10 21:09:41.243: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test emptydir 0777 on node default medium
May 10 21:09:41.296: INFO: Waiting up to 5m0s for pod "pod-35d79b6f-d72e-4377-ad88-eaf1ebfb93e5" in namespace "emptydir-5583" to be "success or failure"
May 10 21:09:41.311: INFO: Pod "pod-35d79b6f-d72e-4377-ad88-eaf1ebfb93e5": Phase="Pending", Reason="", readiness=false. Elapsed: 14.807639ms
May 10 21:09:43.316: INFO: Pod "pod-35d79b6f-d72e-4377-ad88-eaf1ebfb93e5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019477772s
May 10 21:09:45.320: INFO: Pod "pod-35d79b6f-d72e-4377-ad88-eaf1ebfb93e5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.023815965s
STEP: Saw pod success
May 10 21:09:45.320: INFO: Pod "pod-35d79b6f-d72e-4377-ad88-eaf1ebfb93e5" satisfied condition "success or failure"
May 10 21:09:45.323: INFO: Trying to get logs from node jerma-worker2 pod pod-35d79b6f-d72e-4377-ad88-eaf1ebfb93e5 container test-container:
STEP: delete the pod
May 10 21:09:45.344: INFO: Waiting for pod pod-35d79b6f-d72e-4377-ad88-eaf1ebfb93e5 to disappear
May 10 21:09:45.348: INFO: Pod pod-35d79b6f-d72e-4377-ad88-eaf1ebfb93e5 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
May 10 21:09:45.348: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-5583" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":7,"skipped":176,"failed":0}
SS
------------------------------
[sig-api-machinery] Aggregator Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Aggregator
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
May 10 21:09:45.354: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename aggregator
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] Aggregator
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:76
May 10 21:09:45.421: INFO: >>> kubeConfig: /root/.kube/config
[It] Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Registering the sample API server.
May 10 21:09:46.080: INFO: deployment "sample-apiserver-deployment" doesn't have the required revision set
May 10 21:09:48.524: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724741786, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724741786, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724741786, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724741786, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-867766ffc6\" is progressing."}}, CollisionCount:(*int32)(nil)}
May 10 21:09:51.170: INFO: Waited 621.876212ms for the sample-apiserver to be ready to handle requests.
[AfterEach] [sig-api-machinery] Aggregator
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:67
[AfterEach] [sig-api-machinery] Aggregator
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
May 10 21:09:51.665: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "aggregator-3660" for this suite.
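"Registering the sample API server", as logged above, hinges on an APIService object that points the kube-aggregator at the in-cluster Service fronting the sample deployment. The sketch below uses assumed names (group, version, and Service name are illustrative; the suite's actual values may differ):

```yaml
apiVersion: apiregistration.k8s.io/v1
kind: APIService
metadata:
  name: v1alpha1.wardle.example.com   # assumed group/version for illustration
spec:
  group: wardle.example.com           # assumed API group served by the sample apiserver
  version: v1alpha1
  service:
    name: sample-api                  # assumed Service in front of sample-apiserver-deployment
    namespace: aggregator-3660
  groupPriorityMinimum: 2000
  versionPriority: 200
  insecureSkipTLSVerify: true         # test-only shortcut; production would set caBundle
```

Once the backing deployment is Available, the aggregator proxies requests for that group/version to the Service, which is what the "Waited ... for the sample-apiserver to be ready to handle requests" entry confirms.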
• [SLOW TEST:6.393 seconds]
[sig-api-machinery] Aggregator
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] Aggregator Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance]","total":278,"completed":8,"skipped":178,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] [sig-node] PreStop should call prestop when killing a pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] [sig-node] PreStop
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
May 10 21:09:51.748: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename prestop
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] [sig-node] PreStop
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pre_stop.go:172
[It] should call prestop when killing a pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating server pod server in namespace prestop-340
STEP: Waiting for pods to come up.
STEP: Creating tester pod tester in namespace prestop-340
STEP: Deleting pre-stop pod
May 10 21:10:04.907: INFO: Saw: {
    "Hostname": "server",
    "Sent": null,
    "Received": {
        "prestop": 1
    },
    "Errors": null,
    "Log": [
        "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.",
        "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.",
        "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up."
    ],
    "StillContactingPeers": true
}
STEP: Deleting the server pod
[AfterEach] [k8s.io] [sig-node] PreStop
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
May 10 21:10:04.912: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "prestop-340" for this suite.
• [SLOW TEST:13.192 seconds]
[k8s.io] [sig-node] PreStop
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should call prestop when killing a pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] [sig-node] PreStop should call prestop when killing a pod [Conformance]","total":278,"completed":9,"skipped":207,"failed":0}
S
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group and version but different kinds [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
May 10 21:10:04.940: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for multiple CRDs of same group and version but different kinds [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: CRs in the same group and version but different kinds (two CRDs) show up in OpenAPI documentation
May 10 21:10:05.336: INFO: >>> kubeConfig: /root/.kube/config
May 10 21:10:08.399: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
May 10 21:10:18.943: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-9469" for this suite.
• [SLOW TEST:14.011 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for multiple CRDs of same group and version but different kinds [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group and version but different kinds [Conformance]","total":278,"completed":10,"skipped":208,"failed":0}
SSS
------------------------------
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] custom resource defaulting for requests and from storage works [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
May 10 21:10:18.951: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
[It] custom resource defaulting for requests and from storage works [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
May 10 21:10:19.010: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
May 10 21:10:20.237: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-9280" for this suite.
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] custom resource defaulting for requests and from storage works [Conformance]","total":278,"completed":11,"skipped":211,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
May 10 21:10:20.248: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
May 10 21:10:20.648: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
May 10 21:10:22.659: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1,
ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724741820, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724741820, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724741820, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724741820, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 10 21:10:25.702: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should be able to deny attaching pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Registering the webhook via the AdmissionRegistration API STEP: create a pod STEP: 'kubectl attach' the pod, should be denied by the webhook May 10 21:10:29.738: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config attach --namespace=webhook-883 to-be-attached-pod -i -c=container1' May 10 21:10:29.864: INFO: rc: 1 [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 10 21:10:29.871: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-883" for this suite. STEP: Destroying namespace "webhook-883-markers" for this suite. 
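The `kubectl attach` failure above (rc: 1) is the webhook doing its job: a validating webhook registered for the pods/attach subresource returns a deny response, and the API server surfaces it as the command's error. A minimal sketch of the AdmissionReview response body such a webhook would return; field names follow the admission.k8s.io/v1 schema, while the UID and message text here are illustrative:

```python
def deny_attach_response(uid: str, message: str) -> dict:
    """Build an AdmissionReview response that rejects the request.

    A validating webhook returns this JSON body; the API server then
    fails the 'kubectl attach' call with the given message.
    """
    return {
        "apiVersion": "admission.k8s.io/v1",
        "kind": "AdmissionReview",
        "response": {
            "uid": uid,  # must echo the UID from the incoming request
            "allowed": False,
            "status": {"code": 403, "message": message},
        },
    }

resp = deny_attach_response(
    "example-request-uid",
    "attaching to pod 'to-be-attached-pod' is not allowed",
)
```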
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:9.747 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to deny attaching pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","total":278,"completed":12,"skipped":235,"failed":0} SSSSS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 10 21:10:29.995: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test override arguments May 10 21:10:30.048: INFO: Waiting up to 5m0s for pod "client-containers-34fec63a-9cf7-4e3a-ac75-e3efda2116f8" in namespace "containers-9486" to be "success or failure" May 10 21:10:30.052: INFO: Pod "client-containers-34fec63a-9cf7-4e3a-ac75-e3efda2116f8": Phase="Pending", Reason="", readiness=false. 
Elapsed: 4.03552ms May 10 21:10:32.056: INFO: Pod "client-containers-34fec63a-9cf7-4e3a-ac75-e3efda2116f8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00817657s May 10 21:10:34.059: INFO: Pod "client-containers-34fec63a-9cf7-4e3a-ac75-e3efda2116f8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.01147296s STEP: Saw pod success May 10 21:10:34.060: INFO: Pod "client-containers-34fec63a-9cf7-4e3a-ac75-e3efda2116f8" satisfied condition "success or failure" May 10 21:10:34.062: INFO: Trying to get logs from node jerma-worker pod client-containers-34fec63a-9cf7-4e3a-ac75-e3efda2116f8 container test-container: STEP: delete the pod May 10 21:10:34.123: INFO: Waiting for pod client-containers-34fec63a-9cf7-4e3a-ac75-e3efda2116f8 to disappear May 10 21:10:34.125: INFO: Pod client-containers-34fec63a-9cf7-4e3a-ac75-e3efda2116f8 no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 10 21:10:34.125: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-9486" for this suite. 
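The "override arguments" test above relies on the documented merge rules between an image's ENTRYPOINT/CMD and the pod spec's `command`/`args`: `args` alone replaces the image CMD (what this test exercises), `command` alone replaces the ENTRYPOINT and drops the image CMD, and setting both uses exactly `command` + `args`. A sketch of that rule table, with hypothetical argv values:

```python
def effective_invocation(image_entrypoint, image_cmd,
                         pod_command=None, pod_args=None):
    """Merge image ENTRYPOINT/CMD with pod spec command/args
    following the documented Kubernetes container rules."""
    if pod_command is None and pod_args is None:
        return list(image_entrypoint) + list(image_cmd)
    if pod_command is not None and pod_args is None:
        return list(pod_command)                      # image CMD is ignored
    if pod_command is None:
        return list(image_entrypoint) + list(pod_args)  # args replace CMD
    return list(pod_command) + list(pod_args)

# The case this e2e test covers: only `args` is set in the pod spec.
argv = effective_invocation(["/entrypoint"], ["default-arg"],
                            pod_args=["override-arguments"])
```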
•{"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]","total":278,"completed":13,"skipped":240,"failed":0} ------------------------------ [sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 10 21:10:34.136: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating projection with configMap that has name projected-configmap-test-upd-8c0c2ab0-c81f-4da8-b2d6-e34b3bd87ee9 STEP: Creating the pod STEP: Updating configmap projected-configmap-test-upd-8c0c2ab0-c81f-4da8-b2d6-e34b3bd87ee9 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 10 21:10:40.313: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-6007" for this suite. 
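The "waiting to observe update in volume" step above is a polling loop: the test rewrites the ConfigMap, then repeatedly reads the projected file until the kubelet syncs the new data. A local sketch of that loop, using a plain temp file to stand in for the projected volume mount:

```python
import pathlib
import tempfile
import time


def wait_for_content(path: pathlib.Path, expected: str,
                     timeout: float = 5.0, interval: float = 0.05) -> bool:
    """Poll `path` until its contents equal `expected` or timeout elapses."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if path.read_text() == expected:
            return True
        time.sleep(interval)
    return False


# Simulate the kubelet updating the projected file after the ConfigMap edit.
volume_file = pathlib.Path(tempfile.mkdtemp()) / "data-1"
volume_file.write_text("value-1")
saw_initial = wait_for_content(volume_file, "value-1")
volume_file.write_text("value-2")   # the "Updating configmap" step
saw_update = wait_for_content(volume_file, "value-2")
```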
• [SLOW TEST:6.186 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":14,"skipped":240,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should have a working scale subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 10 21:10:40.323: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:64 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:79 STEP: Creating service test in namespace statefulset-5249 [It] should have a working scale subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating statefulset ss in namespace statefulset-5249 May 10 21:10:40.429: INFO: Found 0 stateful pods, waiting for 1 May 10 21:10:50.434: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: getting scale subresource STEP: updating 
a scale subresource STEP: verifying the statefulset Spec.Replicas was modified [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90 May 10 21:10:50.475: INFO: Deleting all statefulset in ns statefulset-5249 May 10 21:10:50.515: INFO: Scaling statefulset ss to 0 May 10 21:11:00.572: INFO: Waiting for statefulset status.replicas updated to 0 May 10 21:11:00.575: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 10 21:11:00.606: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-5249" for this suite. • [SLOW TEST:20.291 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should have a working scale subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should have a working scale subresource [Conformance]","total":278,"completed":15,"skipped":275,"failed":0} SSSSSSSS ------------------------------ [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 10 21:11:00.614: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a 
namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating a watch on configmaps with a certain label STEP: creating a new configmap STEP: modifying the configmap once STEP: changing the label value of the configmap STEP: Expecting to observe a delete notification for the watched object May 10 21:11:00.813: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-6192 /api/v1/namespaces/watch-6192/configmaps/e2e-watch-test-label-changed 5d8b9aca-59fb-478c-8f5b-d3a2688c1fb5 15058334 0 2020-05-10 21:11:00 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},} May 10 21:11:00.813: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-6192 /api/v1/namespaces/watch-6192/configmaps/e2e-watch-test-label-changed 5d8b9aca-59fb-478c-8f5b-d3a2688c1fb5 15058336 0 2020-05-10 21:11:00 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} May 10 21:11:00.814: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-6192 /api/v1/namespaces/watch-6192/configmaps/e2e-watch-test-label-changed 5d8b9aca-59fb-478c-8f5b-d3a2688c1fb5 15058338 0 2020-05-10 21:11:00 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} STEP: modifying the configmap a second time STEP: Expecting not to observe a notification because the object no longer meets the selector's requirements STEP: changing the label value of the configmap back STEP: modifying the configmap a third time STEP: deleting the configmap 
STEP: Expecting to observe an add notification for the watched object when the label value was restored May 10 21:11:10.863: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-6192 /api/v1/namespaces/watch-6192/configmaps/e2e-watch-test-label-changed 5d8b9aca-59fb-478c-8f5b-d3a2688c1fb5 15058390 0 2020-05-10 21:11:00 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} May 10 21:11:10.863: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-6192 /api/v1/namespaces/watch-6192/configmaps/e2e-watch-test-label-changed 5d8b9aca-59fb-478c-8f5b-d3a2688c1fb5 15058391 0 2020-05-10 21:11:00 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},} May 10 21:11:10.863: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-6192 /api/v1/namespaces/watch-6192/configmaps/e2e-watch-test-label-changed 5d8b9aca-59fb-478c-8f5b-d3a2688c1fb5 15058392 0 2020-05-10 21:11:00 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 10 21:11:10.863: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-6192" for this suite. 
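The event sequence in this watch test follows from how label-selector-scoped watches behave: an object entering the selector produces ADDED, changing while inside produces MODIFIED, and leaving produces a synthetic DELETED even though the object still exists. A small simulation of that filtering, with event names mirroring the log and selector matching simplified to one key/value:

```python
def filtered_events(updates, key, value):
    """Translate successive label states of one object into the
    watch events a selector-scoped watcher would observe."""
    events, inside = [], False
    for labels in updates:
        match = labels.get(key) == value
        if match and not inside:
            events.append("ADDED")
        elif match and inside:
            events.append("MODIFIED")
        elif inside:                  # was matching, now is not
            events.append("DELETED")
        inside = match
    return events

# The ConfigMap's label history in this test: create, modify,
# change label away, restore label.
seq = filtered_events(
    [{"watch-this-configmap": "label-changed-and-restored"},
     {"watch-this-configmap": "label-changed-and-restored"},
     {"watch-this-configmap": "some-other-value"},
     {"watch-this-configmap": "label-changed-and-restored"}],
    "watch-this-configmap", "label-changed-and-restored")
```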
• [SLOW TEST:10.257 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance]","total":278,"completed":16,"skipped":283,"failed":0} SSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 10 21:11:10.872: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0644 on node default medium May 10 21:11:11.024: INFO: Waiting up to 5m0s for pod "pod-6202ad30-671e-43aa-95ee-2527824c21f6" in namespace "emptydir-9220" to be "success or failure" May 10 21:11:11.028: INFO: Pod "pod-6202ad30-671e-43aa-95ee-2527824c21f6": Phase="Pending", Reason="", readiness=false. Elapsed: 3.383337ms May 10 21:11:13.052: INFO: Pod "pod-6202ad30-671e-43aa-95ee-2527824c21f6": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.02777063s May 10 21:11:15.055: INFO: Pod "pod-6202ad30-671e-43aa-95ee-2527824c21f6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.030949717s STEP: Saw pod success May 10 21:11:15.055: INFO: Pod "pod-6202ad30-671e-43aa-95ee-2527824c21f6" satisfied condition "success or failure" May 10 21:11:15.058: INFO: Trying to get logs from node jerma-worker2 pod pod-6202ad30-671e-43aa-95ee-2527824c21f6 container test-container: STEP: delete the pod May 10 21:11:15.119: INFO: Waiting for pod pod-6202ad30-671e-43aa-95ee-2527824c21f6 to disappear May 10 21:11:15.122: INFO: Pod pod-6202ad30-671e-43aa-95ee-2527824c21f6 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 10 21:11:15.122: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-9220" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":17,"skipped":287,"failed":0} SSSS ------------------------------ [k8s.io] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 10 21:11:15.129: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] should have monotonically increasing restart count [NodeConformance] [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating pod liveness-050c4e21-f153-4628-a3f4-5ec1dce22bbc in namespace container-probe-4556 May 10 21:11:19.220: INFO: Started pod liveness-050c4e21-f153-4628-a3f4-5ec1dce22bbc in namespace container-probe-4556 STEP: checking the pod's current state and verifying that restartCount is present May 10 21:11:19.244: INFO: Initial restart count of pod liveness-050c4e21-f153-4628-a3f4-5ec1dce22bbc is 0 May 10 21:11:39.308: INFO: Restart count of pod container-probe-4556/liveness-050c4e21-f153-4628-a3f4-5ec1dce22bbc is now 1 (20.064457331s elapsed) May 10 21:11:59.404: INFO: Restart count of pod container-probe-4556/liveness-050c4e21-f153-4628-a3f4-5ec1dce22bbc is now 2 (40.160331324s elapsed) May 10 21:12:19.490: INFO: Restart count of pod container-probe-4556/liveness-050c4e21-f153-4628-a3f4-5ec1dce22bbc is now 3 (1m0.246232002s elapsed) May 10 21:12:39.534: INFO: Restart count of pod container-probe-4556/liveness-050c4e21-f153-4628-a3f4-5ec1dce22bbc is now 4 (1m20.290175221s elapsed) May 10 21:13:43.670: INFO: Restart count of pod container-probe-4556/liveness-050c4e21-f153-4628-a3f4-5ec1dce22bbc is now 5 (2m24.426003776s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 10 21:13:43.700: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-4556" for this suite. 
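The assertion behind this probe test is simply that restartCount never decreases between samples. Note the intervals in the log: roughly 20s between restarts 1 through 4, then 64s before restart 5, which is CrashLoopBackOff doubling the restart delay. A sketch of the monotonicity check over the counts observed above:

```python
def is_monotonic_nondecreasing(counts):
    """True when each sampled restart count is >= the previous one."""
    return all(b >= a for a, b in zip(counts, counts[1:]))

# Restart counts sampled in the log above (restarts 0 through 5).
observed = [0, 1, 2, 3, 4, 5]
monotonic = is_monotonic_nondecreasing(observed)
```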
• [SLOW TEST:148.586 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should have monotonically increasing restart count [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","total":278,"completed":18,"skipped":291,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 10 21:13:43.716: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:133 [It] should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating simple DaemonSet "daemon-set" STEP: Check that daemon pods launch on every node of the cluster. 
May 10 21:13:43.811: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 10 21:13:44.062: INFO: Number of nodes with available pods: 0 May 10 21:13:44.062: INFO: Node jerma-worker is running more than one daemon pod May 10 21:13:45.067: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 10 21:13:45.070: INFO: Number of nodes with available pods: 0 May 10 21:13:45.070: INFO: Node jerma-worker is running more than one daemon pod May 10 21:13:46.079: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 10 21:13:46.082: INFO: Number of nodes with available pods: 0 May 10 21:13:46.082: INFO: Node jerma-worker is running more than one daemon pod May 10 21:13:47.241: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 10 21:13:47.245: INFO: Number of nodes with available pods: 0 May 10 21:13:47.245: INFO: Node jerma-worker is running more than one daemon pod May 10 21:13:48.067: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 10 21:13:48.071: INFO: Number of nodes with available pods: 1 May 10 21:13:48.071: INFO: Node jerma-worker2 is running more than one daemon pod May 10 21:13:49.067: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 10 21:13:49.070: INFO: Number of nodes with available pods: 2 May 10 21:13:49.070: INFO: Number 
of running nodes: 2, number of available pods: 2 STEP: Stop a daemon pod, check that the daemon pod is revived. May 10 21:13:49.105: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 10 21:13:49.107: INFO: Number of nodes with available pods: 1 May 10 21:13:49.107: INFO: Node jerma-worker is running more than one daemon pod May 10 21:13:50.112: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 10 21:13:50.115: INFO: Number of nodes with available pods: 1 May 10 21:13:50.115: INFO: Node jerma-worker is running more than one daemon pod May 10 21:13:51.112: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 10 21:13:51.116: INFO: Number of nodes with available pods: 1 May 10 21:13:51.116: INFO: Node jerma-worker is running more than one daemon pod May 10 21:13:52.112: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 10 21:13:52.116: INFO: Number of nodes with available pods: 1 May 10 21:13:52.116: INFO: Node jerma-worker is running more than one daemon pod May 10 21:13:53.113: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 10 21:13:53.117: INFO: Number of nodes with available pods: 1 May 10 21:13:53.117: INFO: Node jerma-worker is running more than one daemon pod May 10 21:13:54.111: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip 
checking this node May 10 21:13:54.113: INFO: Number of nodes with available pods: 1 May 10 21:13:54.113: INFO: Node jerma-worker is running more than one daemon pod May 10 21:13:55.113: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 10 21:13:55.116: INFO: Number of nodes with available pods: 2 May 10 21:13:55.116: INFO: Number of running nodes: 2, number of available pods: 2 [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:99 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-151, will wait for the garbage collector to delete the pods May 10 21:13:55.178: INFO: Deleting DaemonSet.extensions daemon-set took: 6.8792ms May 10 21:13:55.578: INFO: Terminating DaemonSet.extensions daemon-set pods took: 400.276795ms May 10 21:14:09.281: INFO: Number of nodes with available pods: 0 May 10 21:14:09.281: INFO: Number of running nodes: 0, number of available pods: 0 May 10 21:14:09.288: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-151/daemonsets","resourceVersion":"15059045"},"items":null} May 10 21:14:09.306: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-151/pods","resourceVersion":"15059045"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 10 21:14:09.314: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-151" for this suite. 
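The repeated "DaemonSet pods can't tolerate node jerma-control-plane" lines come from the test excluding any node carrying a NoSchedule taint the DaemonSet's pod does not tolerate. A stripped-down version of that predicate, covering only Equal/Exists tolerations against NoSchedule taints (which is all this log exercises); node names are taken from the log, the toleration shapes are illustrative:

```python
def tolerates(taint, toleration):
    """True if a single toleration covers a single taint."""
    if toleration.get("key") not in (None, taint["key"]):
        return False
    op = toleration.get("operator", "Equal")
    if op == "Equal" and toleration.get("value") != taint.get("value"):
        return False
    return toleration.get("effect") in (None, taint["effect"])

def schedulable_nodes(nodes, tolerations):
    """Nodes whose every NoSchedule taint is matched by some toleration."""
    out = []
    for name, taints in nodes.items():
        if all(any(tolerates(t, tol) for tol in tolerations)
               for t in taints if t["effect"] == "NoSchedule"):
            out.append(name)
    return sorted(out)

nodes = {
    "jerma-control-plane": [{"key": "node-role.kubernetes.io/master",
                             "value": "", "effect": "NoSchedule"}],
    "jerma-worker": [],
    "jerma-worker2": [],
}
# A pod with no tolerations skips the tainted control-plane node.
eligible = schedulable_nodes(nodes, tolerations=[])
```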
• [SLOW TEST:25.604 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should run and stop simple daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance]","total":278,"completed":19,"skipped":337,"failed":0}
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
May 10 21:14:09.320: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40
[It] should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
May 10 21:14:09.427: INFO: Waiting up to 5m0s for pod "downwardapi-volume-9766ba94-f990-4cd5-a38a-00810bc1b598" in namespace "downward-api-1932" to be "success or failure"
May 10 21:14:09.429: INFO: Pod "downwardapi-volume-9766ba94-f990-4cd5-a38a-00810bc1b598": Phase="Pending", Reason="", readiness=false. Elapsed: 2.649473ms
May 10 21:14:11.433: INFO: Pod "downwardapi-volume-9766ba94-f990-4cd5-a38a-00810bc1b598": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00661555s
May 10 21:14:13.438: INFO: Pod "downwardapi-volume-9766ba94-f990-4cd5-a38a-00810bc1b598": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.01116887s
STEP: Saw pod success
May 10 21:14:13.438: INFO: Pod "downwardapi-volume-9766ba94-f990-4cd5-a38a-00810bc1b598" satisfied condition "success or failure"
May 10 21:14:13.442: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-9766ba94-f990-4cd5-a38a-00810bc1b598 container client-container: 
STEP: delete the pod
May 10 21:14:13.471: INFO: Waiting for pod downwardapi-volume-9766ba94-f990-4cd5-a38a-00810bc1b598 to disappear
May 10 21:14:13.491: INFO: Pod downwardapi-volume-9766ba94-f990-4cd5-a38a-00810bc1b598 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
May 10 21:14:13.491: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-1932" for this suite.
•{"msg":"PASSED [sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance]","total":278,"completed":20,"skipped":355,"failed":0}
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Namespaces [Serial]
  should ensure that all services are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
May 10 21:14:13.498: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename namespaces
STEP: Waiting for a default service account to be provisioned in namespace
[It] should ensure that all services are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a test namespace
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Creating a service in the namespace
STEP: Deleting the namespace
STEP: Waiting for the namespace to be removed.
STEP: Recreating the namespace
STEP: Verifying there is no service in the namespace
[AfterEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
May 10 21:14:19.777: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "namespaces-8063" for this suite.
STEP: Destroying namespace "nsdeletetest-7033" for this suite.
May 10 21:14:19.827: INFO: Namespace nsdeletetest-7033 was already deleted
STEP: Destroying namespace "nsdeletetest-6816" for this suite.
• [SLOW TEST:6.340 seconds]
[sig-api-machinery] Namespaces [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should ensure that all services are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance]","total":278,"completed":21,"skipped":374,"failed":0}
SSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
May 10 21:14:19.839: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating secret with name secret-test-45ffa438-2521-4494-b3a7-f866ed88b3d1
STEP: Creating a pod to test consume secrets
May 10 21:14:20.039: INFO: Waiting up to 5m0s for pod "pod-secrets-cbbbdf30-ee03-4cfd-b08c-e1733d554076" in namespace "secrets-750" to be "success or failure"
May 10 21:14:20.067: INFO: Pod "pod-secrets-cbbbdf30-ee03-4cfd-b08c-e1733d554076": Phase="Pending", Reason="", readiness=false. Elapsed: 27.447832ms
May 10 21:14:22.073: INFO: Pod "pod-secrets-cbbbdf30-ee03-4cfd-b08c-e1733d554076": Phase="Pending", Reason="", readiness=false. Elapsed: 2.033450994s
May 10 21:14:24.077: INFO: Pod "pod-secrets-cbbbdf30-ee03-4cfd-b08c-e1733d554076": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.037838323s
STEP: Saw pod success
May 10 21:14:24.077: INFO: Pod "pod-secrets-cbbbdf30-ee03-4cfd-b08c-e1733d554076" satisfied condition "success or failure"
May 10 21:14:24.080: INFO: Trying to get logs from node jerma-worker pod pod-secrets-cbbbdf30-ee03-4cfd-b08c-e1733d554076 container secret-volume-test: 
STEP: delete the pod
May 10 21:14:24.144: INFO: Waiting for pod pod-secrets-cbbbdf30-ee03-4cfd-b08c-e1733d554076 to disappear
May 10 21:14:24.378: INFO: Pod pod-secrets-cbbbdf30-ee03-4cfd-b08c-e1733d554076 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
May 10 21:14:24.378: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-750" for this suite.
•{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance]","total":278,"completed":22,"skipped":391,"failed":0}
SSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  works for CRD preserving unknown fields in an embedded object [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
May 10 21:14:24.388: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for CRD preserving unknown fields in an embedded object [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
May 10 21:14:24.692: INFO: >>> kubeConfig: /root/.kube/config
STEP: client-side validation (kubectl create and apply) allows request with any unknown properties
May 10 21:14:27.627: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-5222 create -f -'
May 10 21:14:30.651: INFO: stderr: ""
May 10 21:14:30.651: INFO: stdout: "e2e-test-crd-publish-openapi-8521-crd.crd-publish-openapi-test-unknown-in-nested.example.com/test-cr created\n"
May 10 21:14:30.651: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-5222 delete e2e-test-crd-publish-openapi-8521-crds test-cr'
May 10 21:14:30.763: INFO: stderr: ""
May 10 21:14:30.763: INFO: stdout: "e2e-test-crd-publish-openapi-8521-crd.crd-publish-openapi-test-unknown-in-nested.example.com \"test-cr\" deleted\n"
May 10 21:14:30.763: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-5222 apply -f -'
May 10 21:14:31.023: INFO: stderr: ""
May 10 21:14:31.023: INFO: stdout: "e2e-test-crd-publish-openapi-8521-crd.crd-publish-openapi-test-unknown-in-nested.example.com/test-cr created\n"
May 10 21:14:31.023: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-5222 delete e2e-test-crd-publish-openapi-8521-crds test-cr'
May 10 21:14:31.152: INFO: stderr: ""
May 10 21:14:31.152: INFO: stdout: "e2e-test-crd-publish-openapi-8521-crd.crd-publish-openapi-test-unknown-in-nested.example.com \"test-cr\" deleted\n"
STEP: kubectl explain works to explain CR
May 10 21:14:31.152: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-8521-crds'
May 10 21:14:31.402: INFO: stderr: ""
May 10 21:14:31.402: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-8521-crd\nVERSION: crd-publish-openapi-test-unknown-in-nested.example.com/v1\n\nDESCRIPTION:\n preserve-unknown-properties in nested field for Testing\n\nFIELDS:\n apiVersion\t\n APIVersion defines the versioned schema of this representation of an\n object. Servers should convert recognized schemas to the latest internal\n value, and may reject unrecognized values. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n\n kind\t\n Kind is a string value representing the REST resource this object\n represents. Servers may infer this from the endpoint the client submits\n requests to. Cannot be updated. In CamelCase. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n\n metadata\t\n Standard object's metadata. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n spec\t\n Specification of Waldo\n\n status\t\n Status of Waldo\n\n"
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
May 10 21:14:34.280: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-5222" for this suite.
• [SLOW TEST:9.899 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for CRD preserving unknown fields in an embedded object [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance]","total":278,"completed":23,"skipped":400,"failed":0}
SSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Security Context when creating containers with AllowPrivilegeEscalation
  should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
May 10 21:14:34.288: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:39
[It] should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
May 10 21:14:34.381: INFO: Waiting up to 5m0s for pod "alpine-nnp-false-6c624cce-d4d2-4240-bbbe-8267dca43413" in namespace "security-context-test-548" to be "success or failure"
May 10 21:14:34.387: INFO: Pod "alpine-nnp-false-6c624cce-d4d2-4240-bbbe-8267dca43413": Phase="Pending", Reason="", readiness=false. Elapsed: 5.834449ms
May 10 21:14:36.391: INFO: Pod "alpine-nnp-false-6c624cce-d4d2-4240-bbbe-8267dca43413": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009987843s
May 10 21:14:38.395: INFO: Pod "alpine-nnp-false-6c624cce-d4d2-4240-bbbe-8267dca43413": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.01388191s
May 10 21:14:38.395: INFO: Pod "alpine-nnp-false-6c624cce-d4d2-4240-bbbe-8267dca43413" satisfied condition "success or failure"
[AfterEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
May 10 21:14:38.401: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-548" for this suite.
•{"msg":"PASSED [k8s.io] Security Context when creating containers with AllowPrivilegeEscalation should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":24,"skipped":420,"failed":0}
SSSSSSSSS
------------------------------
[sig-storage] Secrets
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
May 10 21:14:38.410: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating secret with name secret-test-91f4099e-3a26-4bf1-a64e-471a5ebac618
STEP: Creating a pod to test consume secrets
May 10 21:14:38.531: INFO: Waiting up to 5m0s for pod "pod-secrets-7006bf5c-d6a4-438b-a8ac-4660f0a07310" in namespace "secrets-5682" to be "success or failure"
May 10 21:14:38.540: INFO: Pod "pod-secrets-7006bf5c-d6a4-438b-a8ac-4660f0a07310": Phase="Pending", Reason="", readiness=false. Elapsed: 9.21973ms
May 10 21:14:40.544: INFO: Pod "pod-secrets-7006bf5c-d6a4-438b-a8ac-4660f0a07310": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013112231s
May 10 21:14:42.549: INFO: Pod "pod-secrets-7006bf5c-d6a4-438b-a8ac-4660f0a07310": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.018292987s
STEP: Saw pod success
May 10 21:14:42.549: INFO: Pod "pod-secrets-7006bf5c-d6a4-438b-a8ac-4660f0a07310" satisfied condition "success or failure"
May 10 21:14:42.552: INFO: Trying to get logs from node jerma-worker2 pod pod-secrets-7006bf5c-d6a4-438b-a8ac-4660f0a07310 container secret-volume-test: 
STEP: delete the pod
May 10 21:14:42.585: INFO: Waiting for pod pod-secrets-7006bf5c-d6a4-438b-a8ac-4660f0a07310 to disappear
May 10 21:14:42.600: INFO: Pod pod-secrets-7006bf5c-d6a4-438b-a8ac-4660f0a07310 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
May 10 21:14:42.600: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-5682" for this suite.
•{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":25,"skipped":429,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap
  should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
May 10 21:14:42.608: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name configmap-test-volume-dc0d3e0d-a222-4ea9-a815-3dbd5afb9abe
STEP: Creating a pod to test consume configMaps
May 10 21:14:42.696: INFO: Waiting up to 5m0s for pod "pod-configmaps-5ae9696c-b288-4535-8a59-b7b4408b7c60" in namespace "configmap-3137" to be "success or failure"
May 10 21:14:42.698: INFO: Pod "pod-configmaps-5ae9696c-b288-4535-8a59-b7b4408b7c60": Phase="Pending", Reason="", readiness=false. Elapsed: 2.637104ms
May 10 21:14:44.703: INFO: Pod "pod-configmaps-5ae9696c-b288-4535-8a59-b7b4408b7c60": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007083919s
May 10 21:14:46.707: INFO: Pod "pod-configmaps-5ae9696c-b288-4535-8a59-b7b4408b7c60": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011164125s
STEP: Saw pod success
May 10 21:14:46.707: INFO: Pod "pod-configmaps-5ae9696c-b288-4535-8a59-b7b4408b7c60" satisfied condition "success or failure"
May 10 21:14:46.710: INFO: Trying to get logs from node jerma-worker2 pod pod-configmaps-5ae9696c-b288-4535-8a59-b7b4408b7c60 container configmap-volume-test: 
STEP: delete the pod
May 10 21:14:46.745: INFO: Waiting for pod pod-configmaps-5ae9696c-b288-4535-8a59-b7b4408b7c60 to disappear
May 10 21:14:46.833: INFO: Pod pod-configmaps-5ae9696c-b288-4535-8a59-b7b4408b7c60 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
May 10 21:14:46.833: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-3137" for this suite.
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":26,"skipped":488,"failed":0}
SSSSSSSSSS
------------------------------
[sig-storage] Projected configMap
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
May 10 21:14:46.840: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name projected-configmap-test-volume-map-4aed8336-27ad-4255-b3f6-d24206656822
STEP: Creating a pod to test consume configMaps
May 10 21:14:47.004: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-43c55ee4-6639-43f6-b5b2-4d7c705309d7" in namespace "projected-5480" to be "success or failure"
May 10 21:14:47.010: INFO: Pod "pod-projected-configmaps-43c55ee4-6639-43f6-b5b2-4d7c705309d7": Phase="Pending", Reason="", readiness=false. Elapsed: 6.186801ms
May 10 21:14:49.013: INFO: Pod "pod-projected-configmaps-43c55ee4-6639-43f6-b5b2-4d7c705309d7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009510542s
May 10 21:14:51.017: INFO: Pod "pod-projected-configmaps-43c55ee4-6639-43f6-b5b2-4d7c705309d7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.013563674s
STEP: Saw pod success
May 10 21:14:51.017: INFO: Pod "pod-projected-configmaps-43c55ee4-6639-43f6-b5b2-4d7c705309d7" satisfied condition "success or failure"
May 10 21:14:51.020: INFO: Trying to get logs from node jerma-worker2 pod pod-projected-configmaps-43c55ee4-6639-43f6-b5b2-4d7c705309d7 container projected-configmap-volume-test: 
STEP: delete the pod
May 10 21:14:51.042: INFO: Waiting for pod pod-projected-configmaps-43c55ee4-6639-43f6-b5b2-4d7c705309d7 to disappear
May 10 21:14:51.098: INFO: Pod pod-projected-configmaps-43c55ee4-6639-43f6-b5b2-4d7c705309d7 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
May 10 21:14:51.098: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-5480" for this suite.
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":278,"completed":27,"skipped":498,"failed":0}
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes
  volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
May 10 21:14:51.107: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test emptydir volume type on tmpfs
May 10 21:14:51.420: INFO: Waiting up to 5m0s for pod "pod-9785b9ac-bd07-433d-8bd3-e6000958d228" in namespace "emptydir-9400" to be "success or failure"
May 10 21:14:51.437: INFO: Pod "pod-9785b9ac-bd07-433d-8bd3-e6000958d228": Phase="Pending", Reason="", readiness=false. Elapsed: 16.292849ms
May 10 21:14:53.441: INFO: Pod "pod-9785b9ac-bd07-433d-8bd3-e6000958d228": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02030406s
May 10 21:14:55.445: INFO: Pod "pod-9785b9ac-bd07-433d-8bd3-e6000958d228": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.024906553s
STEP: Saw pod success
May 10 21:14:55.445: INFO: Pod "pod-9785b9ac-bd07-433d-8bd3-e6000958d228" satisfied condition "success or failure"
May 10 21:14:55.448: INFO: Trying to get logs from node jerma-worker pod pod-9785b9ac-bd07-433d-8bd3-e6000958d228 container test-container: 
STEP: delete the pod
May 10 21:14:55.467: INFO: Waiting for pod pod-9785b9ac-bd07-433d-8bd3-e6000958d228 to disappear
May 10 21:14:55.471: INFO: Pod pod-9785b9ac-bd07-433d-8bd3-e6000958d228 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
May 10 21:14:55.471: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-9400" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":28,"skipped":517,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Services
  should serve multiport endpoints from pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
May 10 21:14:55.478: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139
[It] should serve multiport endpoints from pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating service multi-endpoint-test in namespace services-1890
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-1890 to expose endpoints map[]
May 10 21:14:55.671: INFO: Get endpoints failed (3.381138ms elapsed, ignoring for 5s): endpoints "multi-endpoint-test" not found
May 10 21:14:56.674: INFO: successfully validated that service multi-endpoint-test in namespace services-1890 exposes endpoints map[] (1.006628277s elapsed)
STEP: Creating pod pod1 in namespace services-1890
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-1890 to expose endpoints map[pod1:[100]]
May 10 21:15:00.722: INFO: successfully validated that service multi-endpoint-test in namespace services-1890 exposes endpoints map[pod1:[100]] (4.040737302s elapsed)
STEP: Creating pod pod2 in namespace services-1890
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-1890 to expose endpoints map[pod1:[100] pod2:[101]]
May 10 21:15:04.815: INFO: successfully validated that service multi-endpoint-test in namespace services-1890 exposes endpoints map[pod1:[100] pod2:[101]] (4.08952494s elapsed)
STEP: Deleting pod pod1 in namespace services-1890
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-1890 to expose endpoints map[pod2:[101]]
May 10 21:15:04.847: INFO: successfully validated that service multi-endpoint-test in namespace services-1890 exposes endpoints map[pod2:[101]] (27.365763ms elapsed)
STEP: Deleting pod pod2 in namespace services-1890
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-1890 to expose endpoints map[]
May 10 21:15:05.862: INFO: successfully validated that service multi-endpoint-test in namespace services-1890 exposes endpoints map[] (1.011813258s elapsed)
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
May 10 21:15:05.927: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-1890" for this suite.
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143
• [SLOW TEST:10.504 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should serve multiport endpoints from pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] Services should serve multiport endpoints from pods [Conformance]","total":278,"completed":29,"skipped":565,"failed":0}
SSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  should mutate custom resource with pruning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
May 10 21:15:05.983: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
May 10 21:15:06.961: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
May 10 21:15:08.972: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724742106, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724742106, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724742107, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724742106, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
May 10 21:15:12.007: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should mutate custom resource with pruning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
May 10 21:15:12.010: INFO: >>> kubeConfig: /root/.kube/config
STEP: Registering the mutating webhook for custom resource e2e-test-webhook-2874-crds.webhook.example.com via the AdmissionRegistration API
STEP: Creating a custom resource that should be mutated by the webhook
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
May 10 21:15:13.185: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-9012" for this suite.
STEP: Destroying namespace "webhook-9012-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
• [SLOW TEST:7.303 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should mutate custom resource with pruning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]","total":278,"completed":30,"skipped":575,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] HostPath
  should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] HostPath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
May 10 21:15:13.286: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename hostpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] HostPath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:37
[It] should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test hostPath mode
May 10 21:15:13.364: INFO: Waiting up to 5m0s for pod "pod-host-path-test" in namespace "hostpath-5368" to be "success or failure"
May 10 21:15:13.387: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 22.632803ms
May 10 21:15:15.391: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 2.026900944s
May 10 21:15:17.395: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 4.030653741s
May 10 21:15:19.399: INFO: Pod "pod-host-path-test": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.035186625s
STEP: Saw pod success
May 10 21:15:19.399: INFO: Pod "pod-host-path-test" satisfied condition "success or failure"
May 10 21:15:19.403: INFO: Trying to get logs from node jerma-worker2 pod pod-host-path-test container test-container-1: 
STEP: delete the pod
May 10 21:15:19.420: INFO: Waiting for pod pod-host-path-test to disappear
May 10 21:15:19.474: INFO: Pod pod-host-path-test no longer exists
[AfterEach] [sig-storage] HostPath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
May 10 21:15:19.474: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "hostpath-5368" for this suite.
• [SLOW TEST:6.196 seconds]
[sig-storage] HostPath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:34
  should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] HostPath should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":31,"skipped":603,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
May 10 21:15:19.483: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64
STEP: create the container to handle the HTTPGet hook request.
[It] should execute poststart exec hook properly [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: create the pod with lifecycle hook
STEP: check poststart hook
STEP: delete the pod with lifecycle hook
May 10 21:15:27.686: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
May 10 21:15:27.698: INFO: Pod pod-with-poststart-exec-hook still exists
May 10 21:15:29.698: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
May 10 21:15:29.703: INFO: Pod pod-with-poststart-exec-hook still exists
May 10 21:15:31.698: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
May 10 21:15:31.702: INFO: Pod pod-with-poststart-exec-hook still exists
May 10 21:15:33.698: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
May 10 21:15:33.702: INFO: Pod pod-with-poststart-exec-hook still exists
May 10 21:15:35.698: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
May 10 21:15:35.702: INFO: Pod pod-with-poststart-exec-hook still exists
May 10 21:15:37.698: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
May 10 21:15:37.703: INFO: Pod pod-with-poststart-exec-hook still exists
May 10 21:15:39.698: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
May 10 21:15:39.703: INFO: Pod pod-with-poststart-exec-hook no longer exists
[AfterEach] [k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
May 10 21:15:39.703: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-9092" for this suite.
• [SLOW TEST:20.230 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute poststart exec hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance]","total":278,"completed":32,"skipped":640,"failed":0}
SS
------------------------------
[k8s.io] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
May 10 21:15:39.713: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating pod test-webserver-7a9a5798-2b59-4fa5-afd8-4e92f97680d0 in namespace container-probe-5097
May 10 21:15:43.872: INFO: Started pod test-webserver-7a9a5798-2b59-4fa5-afd8-4e92f97680d0 in namespace container-probe-5097
STEP: checking the pod's current state and verifying that restartCount is present
May 10 21:15:43.875: INFO: Initial restart count of pod test-webserver-7a9a5798-2b59-4fa5-afd8-4e92f97680d0 is 0
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
May 10 21:19:44.563: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-5097" for this suite.
• [SLOW TEST:244.862 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":278,"completed":33,"skipped":642,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
May 10 21:19:44.576: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating secret with name secret-test-map-788d8b74-a84b-44ef-a1a5-78147416c3a8
STEP: Creating a pod to test consume secrets
May 10 21:19:44.665: INFO: Waiting up to 5m0s for pod "pod-secrets-8126549f-9756-41d0-8474-77b11faa1e3f" in namespace "secrets-3549" to be "success or failure"
May 10 21:19:44.850: INFO: Pod "pod-secrets-8126549f-9756-41d0-8474-77b11faa1e3f": Phase="Pending", Reason="", readiness=false. Elapsed: 185.323151ms
May 10 21:19:46.928: INFO: Pod "pod-secrets-8126549f-9756-41d0-8474-77b11faa1e3f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.262651676s
May 10 21:19:48.932: INFO: Pod "pod-secrets-8126549f-9756-41d0-8474-77b11faa1e3f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.267170853s
STEP: Saw pod success
May 10 21:19:48.932: INFO: Pod "pod-secrets-8126549f-9756-41d0-8474-77b11faa1e3f" satisfied condition "success or failure"
May 10 21:19:48.936: INFO: Trying to get logs from node jerma-worker pod pod-secrets-8126549f-9756-41d0-8474-77b11faa1e3f container secret-volume-test:
STEP: delete the pod
May 10 21:19:48.972: INFO: Waiting for pod pod-secrets-8126549f-9756-41d0-8474-77b11faa1e3f to disappear
May 10 21:19:48.976: INFO: Pod pod-secrets-8126549f-9756-41d0-8474-77b11faa1e3f no longer exists
[AfterEach] [sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
May 10 21:19:48.976: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-3549" for this suite.
•{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":34,"skipped":677,"failed":0}
SSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl run rc should create an rc from an image [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
May 10 21:19:48.985: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272
[BeforeEach] Kubectl run rc
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1525
[It] should create an rc from an image [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: running the image docker.io/library/httpd:2.4.38-alpine
May 10 21:19:49.032: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-rc --image=docker.io/library/httpd:2.4.38-alpine --generator=run/v1 --namespace=kubectl-5888'
May 10 21:19:49.146: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
May 10 21:19:49.146: INFO: stdout: "replicationcontroller/e2e-test-httpd-rc created\n"
STEP: verifying the rc e2e-test-httpd-rc was created
STEP: verifying the pod controlled by rc e2e-test-httpd-rc was created
STEP: confirm that you can get logs from an rc
May 10 21:19:49.209: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [e2e-test-httpd-rc-75rlj]
May 10 21:19:49.209: INFO: Waiting up to 5m0s for pod "e2e-test-httpd-rc-75rlj" in namespace "kubectl-5888" to be "running and ready"
May 10 21:19:49.223: INFO: Pod "e2e-test-httpd-rc-75rlj": Phase="Pending", Reason="", readiness=false. Elapsed: 13.816594ms
May 10 21:19:51.227: INFO: Pod "e2e-test-httpd-rc-75rlj": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017708537s
May 10 21:19:53.231: INFO: Pod "e2e-test-httpd-rc-75rlj": Phase="Running", Reason="", readiness=true. Elapsed: 4.021299023s
May 10 21:19:53.231: INFO: Pod "e2e-test-httpd-rc-75rlj" satisfied condition "running and ready"
May 10 21:19:53.231: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [e2e-test-httpd-rc-75rlj]
May 10 21:19:53.231: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs rc/e2e-test-httpd-rc --namespace=kubectl-5888'
May 10 21:19:53.373: INFO: stderr: ""
May 10 21:19:53.373: INFO: stdout: "AH00558: httpd: Could not reliably determine the server's fully qualified domain name, using 10.244.1.203. Set the 'ServerName' directive globally to suppress this message\nAH00558: httpd: Could not reliably determine the server's fully qualified domain name, using 10.244.1.203. Set the 'ServerName' directive globally to suppress this message\n[Sun May 10 21:19:51.516553 2020] [mpm_event:notice] [pid 1:tid 140545982344040] AH00489: Apache/2.4.38 (Unix) configured -- resuming normal operations\n[Sun May 10 21:19:51.516605 2020] [core:notice] [pid 1:tid 140545982344040] AH00094: Command line: 'httpd -D FOREGROUND'\n"
[AfterEach] Kubectl run rc
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1530
May 10 21:19:53.373: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-httpd-rc --namespace=kubectl-5888'
May 10 21:19:53.495: INFO: stderr: ""
May 10 21:19:53.495: INFO: stdout: "replicationcontroller \"e2e-test-httpd-rc\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
May 10 21:19:53.495: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-5888" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl run rc should create an rc from an image [Conformance]","total":278,"completed":35,"skipped":693,"failed":0}
SSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
May 10 21:19:53.506: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating secret with name secret-test-e0412282-b07e-4798-a533-169dc35bba80
STEP: Creating a pod to test consume secrets
May 10 21:19:53.667: INFO: Waiting up to 5m0s for pod "pod-secrets-3f6fc4c4-f25d-48fa-a0d5-e19db4976955" in namespace "secrets-5260" to be "success or failure"
May 10 21:19:53.690: INFO: Pod "pod-secrets-3f6fc4c4-f25d-48fa-a0d5-e19db4976955": Phase="Pending", Reason="", readiness=false. Elapsed: 22.349433ms
May 10 21:19:55.694: INFO: Pod "pod-secrets-3f6fc4c4-f25d-48fa-a0d5-e19db4976955": Phase="Pending", Reason="", readiness=false. Elapsed: 2.027010257s
May 10 21:19:57.699: INFO: Pod "pod-secrets-3f6fc4c4-f25d-48fa-a0d5-e19db4976955": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.031725482s
STEP: Saw pod success
May 10 21:19:57.699: INFO: Pod "pod-secrets-3f6fc4c4-f25d-48fa-a0d5-e19db4976955" satisfied condition "success or failure"
May 10 21:19:57.703: INFO: Trying to get logs from node jerma-worker pod pod-secrets-3f6fc4c4-f25d-48fa-a0d5-e19db4976955 container secret-volume-test:
STEP: delete the pod
May 10 21:19:57.812: INFO: Waiting for pod pod-secrets-3f6fc4c4-f25d-48fa-a0d5-e19db4976955 to disappear
May 10 21:19:57.868: INFO: Pod pod-secrets-3f6fc4c4-f25d-48fa-a0d5-e19db4976955 no longer exists
[AfterEach] [sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
May 10 21:19:57.868: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-5260" for this suite.
STEP: Destroying namespace "secret-namespace-9135" for this suite.
•{"msg":"PASSED [sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]","total":278,"completed":36,"skipped":710,"failed":0}
SSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl run pod should create a pod from an image when restart is Never [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
May 10 21:19:57.900: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272
[BeforeEach] Kubectl run pod
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1754
[It] should create a pod from an image when restart is Never [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: running the image docker.io/library/httpd:2.4.38-alpine
May 10 21:19:57.941: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-pod --restart=Never --generator=run-pod/v1 --image=docker.io/library/httpd:2.4.38-alpine --namespace=kubectl-3368'
May 10 21:19:58.049: INFO: stderr: ""
May 10 21:19:58.049: INFO: stdout: "pod/e2e-test-httpd-pod created\n"
STEP: verifying the pod e2e-test-httpd-pod was created
[AfterEach] Kubectl run pod
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1759
May 10 21:19:58.081: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-httpd-pod --namespace=kubectl-3368'
May 10 21:20:09.509: INFO: stderr: ""
May 10 21:20:09.509: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
May 10 21:20:09.509: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-3368" for this suite.
• [SLOW TEST:11.622 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl run pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1750
    should create a pod from an image when restart is Never [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl run pod should create a pod from an image when restart is Never [Conformance]","total":278,"completed":37,"skipped":718,"failed":0}
SSSSS
------------------------------
[sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
May 10 21:20:09.522: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40
[It] should provide container's memory request [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
May 10 21:20:09.608: INFO: Waiting up to 5m0s for pod "downwardapi-volume-dc6c37aa-7de0-41c4-9c3c-4fb0c77fdd09" in namespace "downward-api-1521" to be "success or failure"
May 10 21:20:09.618: INFO: Pod "downwardapi-volume-dc6c37aa-7de0-41c4-9c3c-4fb0c77fdd09": Phase="Pending", Reason="", readiness=false. Elapsed: 10.239474ms
May 10 21:20:11.623: INFO: Pod "downwardapi-volume-dc6c37aa-7de0-41c4-9c3c-4fb0c77fdd09": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015004898s
May 10 21:20:13.636: INFO: Pod "downwardapi-volume-dc6c37aa-7de0-41c4-9c3c-4fb0c77fdd09": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.028104955s
STEP: Saw pod success
May 10 21:20:13.636: INFO: Pod "downwardapi-volume-dc6c37aa-7de0-41c4-9c3c-4fb0c77fdd09" satisfied condition "success or failure"
May 10 21:20:13.638: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-dc6c37aa-7de0-41c4-9c3c-4fb0c77fdd09 container client-container:
STEP: delete the pod
May 10 21:20:13.670: INFO: Waiting for pod downwardapi-volume-dc6c37aa-7de0-41c4-9c3c-4fb0c77fdd09 to disappear
May 10 21:20:13.689: INFO: Pod downwardapi-volume-dc6c37aa-7de0-41c4-9c3c-4fb0c77fdd09 no longer exists
[AfterEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
May 10 21:20:13.690: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-1521" for this suite.
•{"msg":"PASSED [sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance]","total":278,"completed":38,"skipped":723,"failed":0}
SSSSSSSSSSSSS
------------------------------
[k8s.io] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
May 10 21:20:13.699: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should use the image defaults if command and args are blank [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[AfterEach] [k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
May 10 21:20:17.860: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-2605" for this suite.
•{"msg":"PASSED [k8s.io] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance]","total":278,"completed":39,"skipped":736,"failed":0}
SSSSSSS
------------------------------
[sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
May 10 21:20:17.868: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40
[It] should provide container's memory limit [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
May 10 21:20:17.913: INFO: Waiting up to 5m0s for pod "downwardapi-volume-7662bb96-460f-4c84-a9ad-b49a569afe5a" in namespace "projected-1736" to be "success or failure"
May 10 21:20:17.940: INFO: Pod "downwardapi-volume-7662bb96-460f-4c84-a9ad-b49a569afe5a": Phase="Pending", Reason="", readiness=false. Elapsed: 26.093704ms
May 10 21:20:19.946: INFO: Pod "downwardapi-volume-7662bb96-460f-4c84-a9ad-b49a569afe5a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.032758411s
May 10 21:20:21.988: INFO: Pod "downwardapi-volume-7662bb96-460f-4c84-a9ad-b49a569afe5a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.074391907s
STEP: Saw pod success
May 10 21:20:21.988: INFO: Pod "downwardapi-volume-7662bb96-460f-4c84-a9ad-b49a569afe5a" satisfied condition "success or failure"
May 10 21:20:21.991: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-7662bb96-460f-4c84-a9ad-b49a569afe5a container client-container:
STEP: delete the pod
May 10 21:20:22.182: INFO: Waiting for pod downwardapi-volume-7662bb96-460f-4c84-a9ad-b49a569afe5a to disappear
May 10 21:20:22.199: INFO: Pod downwardapi-volume-7662bb96-460f-4c84-a9ad-b49a569afe5a no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
May 10 21:20:22.199: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1736" for this suite.
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance]","total":278,"completed":40,"skipped":743,"failed":0}
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
May 10 21:20:22.209: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating pod busybox-f416a877-0f5e-470a-96ad-2dc25302cea2 in namespace container-probe-529
May 10 21:20:26.340: INFO: Started pod busybox-f416a877-0f5e-470a-96ad-2dc25302cea2 in namespace container-probe-529
STEP: checking the pod's current state and verifying that restartCount is present
May 10 21:20:26.344: INFO: Initial restart count of pod busybox-f416a877-0f5e-470a-96ad-2dc25302cea2 is 0
May 10 21:21:14.663: INFO: Restart count of pod container-probe-529/busybox-f416a877-0f5e-470a-96ad-2dc25302cea2 is now 1 (48.3195942s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
May 10 21:21:14.703: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-529" for this suite.
• [SLOW TEST:52.506 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Probing container should be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":278,"completed":41,"skipped":764,"failed":0}
SSSSSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
May 10 21:21:14.715: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should run with the expected status [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Container 'terminate-cmd-rpa': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpa': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpa': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpa': should get the expected 'State'
STEP: Container 'terminate-cmd-rpa': should be possible to delete [NodeConformance]
STEP: Container 'terminate-cmd-rpof': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpof': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpof': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpof': should get the expected 'State'
STEP: Container 'terminate-cmd-rpof': should be possible to delete [NodeConformance]
STEP: Container 'terminate-cmd-rpn': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpn': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpn': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpn': should get the expected 'State'
STEP: Container 'terminate-cmd-rpn': should be possible to delete [NodeConformance]
[AfterEach] [k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
May 10 21:21:45.257: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-8548" for this suite.
• [SLOW TEST:30.547 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  blackbox test
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
    when starting a container that exits
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:39
      should run with the expected status [NodeConformance] [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance]","total":278,"completed":42,"skipped":775,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-auth] ServiceAccounts should mount an API token into pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-auth] ServiceAccounts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
May 10 21:21:45.262: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should mount an API token into pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: getting the auto-created API token
STEP: reading a file in the container
May 10 21:21:49.907: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-8252 pod-service-account-b33eab62-71cd-49f8-bd8b-c6a0e3ed9420 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/token'
STEP: reading a file in the container
May 10 21:21:50.162: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-8252 pod-service-account-b33eab62-71cd-49f8-bd8b-c6a0e3ed9420 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/ca.crt'
STEP: reading a file in the container
May 10 21:21:50.344: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-8252 pod-service-account-b33eab62-71cd-49f8-bd8b-c6a0e3ed9420 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/namespace'
[AfterEach] [sig-auth] ServiceAccounts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
May 10 21:21:50.583: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svcaccounts-8252" for this suite.
• [SLOW TEST:5.334 seconds]
[sig-auth] ServiceAccounts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23
  should mount an API token into pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-auth] ServiceAccounts should mount an API token into pods [Conformance]","total":278,"completed":43,"skipped":801,"failed":0}
SSSSSSS
------------------------------
[sig-api-machinery] Secrets should be consumable via the environment [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
May 10 21:21:50.596: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via the environment [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating secret secrets-4608/secret-test-537b9bfe-134a-4cbb-ac99-ecd764418fd3
STEP: Creating a pod to test consume secrets
May 10 21:21:50.708: INFO: Waiting up to 5m0s for pod "pod-configmaps-31fc6af7-21e6-4a89-a297-ca52f4063a95" in namespace "secrets-4608" to be "success or failure"
May 10 21:21:50.713: INFO: Pod "pod-configmaps-31fc6af7-21e6-4a89-a297-ca52f4063a95": Phase="Pending", Reason="", readiness=false. Elapsed: 5.810633ms
May 10 21:21:52.762: INFO: Pod "pod-configmaps-31fc6af7-21e6-4a89-a297-ca52f4063a95": Phase="Pending", Reason="", readiness=false. Elapsed: 2.054388701s
May 10 21:21:54.767: INFO: Pod "pod-configmaps-31fc6af7-21e6-4a89-a297-ca52f4063a95": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.059310318s
STEP: Saw pod success
May 10 21:21:54.767: INFO: Pod "pod-configmaps-31fc6af7-21e6-4a89-a297-ca52f4063a95" satisfied condition "success or failure"
May 10 21:21:54.770: INFO: Trying to get logs from node jerma-worker2 pod pod-configmaps-31fc6af7-21e6-4a89-a297-ca52f4063a95 container env-test:
STEP: delete the pod
May 10 21:21:54.846: INFO: Waiting for pod pod-configmaps-31fc6af7-21e6-4a89-a297-ca52f4063a95 to disappear
May 10 21:21:54.883: INFO: Pod pod-configmaps-31fc6af7-21e6-4a89-a297-ca52f4063a95 no longer exists
[AfterEach] [sig-api-machinery] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
May 10 21:21:54.883: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-4608" for this suite.
•{"msg":"PASSED [sig-api-machinery] Secrets should be consumable via the environment [NodeConformance] [Conformance]","total":278,"completed":44,"skipped":808,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 10 21:21:54.891: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 10 21:22:54.949: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-4433" for this suite. 
• [SLOW TEST:60.066 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]","total":278,"completed":45,"skipped":850,"failed":0} SS ------------------------------ [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 10 21:22:54.958: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin May 10 21:22:55.064: INFO: Waiting up to 5m0s for pod "downwardapi-volume-dc3845a4-a421-4420-b85a-7eb2d9ff081c" in namespace "downward-api-2614" to be "success or failure" May 10 21:22:55.076: INFO: Pod "downwardapi-volume-dc3845a4-a421-4420-b85a-7eb2d9ff081c": Phase="Pending", Reason="", readiness=false. 
Elapsed: 11.354125ms May 10 21:22:57.080: INFO: Pod "downwardapi-volume-dc3845a4-a421-4420-b85a-7eb2d9ff081c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015607958s May 10 21:22:59.098: INFO: Pod "downwardapi-volume-dc3845a4-a421-4420-b85a-7eb2d9ff081c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.033714873s STEP: Saw pod success May 10 21:22:59.098: INFO: Pod "downwardapi-volume-dc3845a4-a421-4420-b85a-7eb2d9ff081c" satisfied condition "success or failure" May 10 21:22:59.101: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-dc3845a4-a421-4420-b85a-7eb2d9ff081c container client-container: STEP: delete the pod May 10 21:22:59.144: INFO: Waiting for pod downwardapi-volume-dc3845a4-a421-4420-b85a-7eb2d9ff081c to disappear May 10 21:22:59.334: INFO: Pod downwardapi-volume-dc3845a4-a421-4420-b85a-7eb2d9ff081c no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 10 21:22:59.334: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-2614" for this suite. 
•{"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":278,"completed":46,"skipped":852,"failed":0} SSSSSSSSSS ------------------------------ [sig-api-machinery] Servers with support for Table transformation should return a 406 for a backend which does not implement metadata [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Servers with support for Table transformation /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 10 21:22:59.343: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename tables STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] Servers with support for Table transformation /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/table_conversion.go:46 [It] should return a 406 for a backend which does not implement metadata [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [AfterEach] [sig-api-machinery] Servers with support for Table transformation /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 10 21:22:59.522: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "tables-9077" for this suite. 
•{"msg":"PASSED [sig-api-machinery] Servers with support for Table transformation should return a 406 for a backend which does not implement metadata [Conformance]","total":278,"completed":47,"skipped":862,"failed":0} SSS ------------------------------ [sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 10 21:22:59.549: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin May 10 21:22:59.618: INFO: Waiting up to 5m0s for pod "downwardapi-volume-9e2e7a91-d231-4823-b2b2-fbd9cab3d572" in namespace "downward-api-6494" to be "success or failure" May 10 21:22:59.634: INFO: Pod "downwardapi-volume-9e2e7a91-d231-4823-b2b2-fbd9cab3d572": Phase="Pending", Reason="", readiness=false. Elapsed: 16.545675ms May 10 21:23:01.638: INFO: Pod "downwardapi-volume-9e2e7a91-d231-4823-b2b2-fbd9cab3d572": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020847231s May 10 21:23:03.643: INFO: Pod "downwardapi-volume-9e2e7a91-d231-4823-b2b2-fbd9cab3d572": Phase="Running", Reason="", readiness=true. Elapsed: 4.025172454s May 10 21:23:05.648: INFO: Pod "downwardapi-volume-9e2e7a91-d231-4823-b2b2-fbd9cab3d572": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.030544451s STEP: Saw pod success May 10 21:23:05.648: INFO: Pod "downwardapi-volume-9e2e7a91-d231-4823-b2b2-fbd9cab3d572" satisfied condition "success or failure" May 10 21:23:05.651: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-9e2e7a91-d231-4823-b2b2-fbd9cab3d572 container client-container: STEP: delete the pod May 10 21:23:05.686: INFO: Waiting for pod downwardapi-volume-9e2e7a91-d231-4823-b2b2-fbd9cab3d572 to disappear May 10 21:23:05.699: INFO: Pod downwardapi-volume-9e2e7a91-d231-4823-b2b2-fbd9cab3d572 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 10 21:23:05.699: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-6494" for this suite. • [SLOW TEST:6.157 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35 should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance]","total":278,"completed":48,"skipped":865,"failed":0} SSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl run deployment should create a deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 10 21:23:05.706: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace 
[BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [BeforeEach] Kubectl run deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1626 [It] should create a deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: running the image docker.io/library/httpd:2.4.38-alpine May 10 21:23:05.745: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-deployment --image=docker.io/library/httpd:2.4.38-alpine --generator=deployment/apps.v1 --namespace=kubectl-2044' May 10 21:23:05.856: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" May 10 21:23:05.856: INFO: stdout: "deployment.apps/e2e-test-httpd-deployment created\n" STEP: verifying the deployment e2e-test-httpd-deployment was created STEP: verifying the pod controlled by deployment e2e-test-httpd-deployment was created [AfterEach] Kubectl run deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1631 May 10 21:23:07.944: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-httpd-deployment --namespace=kubectl-2044' May 10 21:23:08.082: INFO: stderr: "" May 10 21:23:08.082: INFO: stdout: "deployment.apps \"e2e-test-httpd-deployment\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 10 21:23:08.083: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-2044" for this suite. 
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl run deployment should create a deployment from an image [Conformance]","total":278,"completed":49,"skipped":871,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 10 21:23:08.128: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0666 on node default medium May 10 21:23:08.262: INFO: Waiting up to 5m0s for pod "pod-dafaa166-d186-4496-9a30-859b0c855dc7" in namespace "emptydir-7786" to be "success or failure" May 10 21:23:08.280: INFO: Pod "pod-dafaa166-d186-4496-9a30-859b0c855dc7": Phase="Pending", Reason="", readiness=false. Elapsed: 17.738979ms May 10 21:23:10.284: INFO: Pod "pod-dafaa166-d186-4496-9a30-859b0c855dc7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022100429s May 10 21:23:12.288: INFO: Pod "pod-dafaa166-d186-4496-9a30-859b0c855dc7": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.026482817s STEP: Saw pod success May 10 21:23:12.288: INFO: Pod "pod-dafaa166-d186-4496-9a30-859b0c855dc7" satisfied condition "success or failure" May 10 21:23:12.291: INFO: Trying to get logs from node jerma-worker2 pod pod-dafaa166-d186-4496-9a30-859b0c855dc7 container test-container: STEP: delete the pod May 10 21:23:12.429: INFO: Waiting for pod pod-dafaa166-d186-4496-9a30-859b0c855dc7 to disappear May 10 21:23:12.436: INFO: Pod pod-dafaa166-d186-4496-9a30-859b0c855dc7 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 10 21:23:12.436: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-7786" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":50,"skipped":883,"failed":0} SSSSSSS ------------------------------ [k8s.io] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 10 21:23:12.442: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating pod liveness-11f159e5-eef1-45a0-b9cd-15743916ed1e in 
namespace container-probe-5814 May 10 21:23:16.627: INFO: Started pod liveness-11f159e5-eef1-45a0-b9cd-15743916ed1e in namespace container-probe-5814 STEP: checking the pod's current state and verifying that restartCount is present May 10 21:23:16.631: INFO: Initial restart count of pod liveness-11f159e5-eef1-45a0-b9cd-15743916ed1e is 0 May 10 21:23:42.687: INFO: Restart count of pod container-probe-5814/liveness-11f159e5-eef1-45a0-b9cd-15743916ed1e is now 1 (26.056444791s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 10 21:23:42.701: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-5814" for this suite. • [SLOW TEST:30.317 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":278,"completed":51,"skipped":890,"failed":0} SSSSS ------------------------------ [sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 10 21:23:42.759: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] updates should be reflected in volume 
[NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name configmap-test-upd-fd75d32e-fa19-4be7-b9c8-bd47ce85dbc7 STEP: Creating the pod STEP: Updating configmap configmap-test-upd-fd75d32e-fa19-4be7-b9c8-bd47ce85dbc7 STEP: waiting to observe update in volume [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 10 21:23:49.616: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-2655" for this suite. • [SLOW TEST:6.866 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":52,"skipped":895,"failed":0} SSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 10 21:23:49.625: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name 
secret-test-f598a366-b3f3-491a-82ed-cd637d37d327 STEP: Creating a pod to test consume secrets May 10 21:23:49.727: INFO: Waiting up to 5m0s for pod "pod-secrets-34a8d0db-2d51-43bf-b506-a4814d5bfe99" in namespace "secrets-1933" to be "success or failure" May 10 21:23:49.753: INFO: Pod "pod-secrets-34a8d0db-2d51-43bf-b506-a4814d5bfe99": Phase="Pending", Reason="", readiness=false. Elapsed: 25.553515ms May 10 21:23:51.756: INFO: Pod "pod-secrets-34a8d0db-2d51-43bf-b506-a4814d5bfe99": Phase="Pending", Reason="", readiness=false. Elapsed: 2.028535336s May 10 21:23:53.760: INFO: Pod "pod-secrets-34a8d0db-2d51-43bf-b506-a4814d5bfe99": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.032455712s STEP: Saw pod success May 10 21:23:53.760: INFO: Pod "pod-secrets-34a8d0db-2d51-43bf-b506-a4814d5bfe99" satisfied condition "success or failure" May 10 21:23:53.763: INFO: Trying to get logs from node jerma-worker2 pod pod-secrets-34a8d0db-2d51-43bf-b506-a4814d5bfe99 container secret-volume-test: STEP: delete the pod May 10 21:23:53.785: INFO: Waiting for pod pod-secrets-34a8d0db-2d51-43bf-b506-a4814d5bfe99 to disappear May 10 21:23:53.847: INFO: Pod pod-secrets-34a8d0db-2d51-43bf-b506-a4814d5bfe99 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 10 21:23:53.847: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-1933" for this suite. 
•{"msg":"PASSED [sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":278,"completed":53,"skipped":903,"failed":0} SSS ------------------------------ [sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 10 21:23:53.855: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name s-test-opt-del-c970b935-9698-4582-9138-75e7e43a5826 STEP: Creating secret with name s-test-opt-upd-845b1396-08fd-4843-83bc-12c05e4e0e38 STEP: Creating the pod STEP: Deleting secret s-test-opt-del-c970b935-9698-4582-9138-75e7e43a5826 STEP: Updating secret s-test-opt-upd-845b1396-08fd-4843-83bc-12c05e4e0e38 STEP: Creating secret with name s-test-opt-create-948956a2-97a1-4d29-ad6a-03ae83075398 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 10 21:25:26.491: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-2409" for this suite. 
• [SLOW TEST:92.644 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":54,"skipped":906,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 10 21:25:26.500: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name projected-secret-test-07b24a47-f156-4dee-9870-97b6203fa8ee STEP: Creating a pod to test consume secrets May 10 21:25:26.566: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-d5a8d016-5af4-4bb7-a063-b096f71a2e80" in namespace "projected-4081" to be "success or failure" May 10 21:25:26.570: INFO: Pod "pod-projected-secrets-d5a8d016-5af4-4bb7-a063-b096f71a2e80": Phase="Pending", Reason="", readiness=false. Elapsed: 3.640343ms May 10 21:25:28.574: INFO: Pod "pod-projected-secrets-d5a8d016-5af4-4bb7-a063-b096f71a2e80": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.007650733s May 10 21:25:30.577: INFO: Pod "pod-projected-secrets-d5a8d016-5af4-4bb7-a063-b096f71a2e80": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.01134981s STEP: Saw pod success May 10 21:25:30.577: INFO: Pod "pod-projected-secrets-d5a8d016-5af4-4bb7-a063-b096f71a2e80" satisfied condition "success or failure" May 10 21:25:30.580: INFO: Trying to get logs from node jerma-worker pod pod-projected-secrets-d5a8d016-5af4-4bb7-a063-b096f71a2e80 container secret-volume-test: STEP: delete the pod May 10 21:25:30.659: INFO: Waiting for pod pod-projected-secrets-d5a8d016-5af4-4bb7-a063-b096f71a2e80 to disappear May 10 21:25:30.665: INFO: Pod pod-projected-secrets-d5a8d016-5af4-4bb7-a063-b096f71a2e80 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 10 21:25:30.665: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-4081" for this suite. 
•{"msg":"PASSED [sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":278,"completed":55,"skipped":938,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 10 21:25:30.671: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name projected-configmap-test-volume-b97e5e3b-ea3a-4b5b-9938-810de61df608 STEP: Creating a pod to test consume configMaps May 10 21:25:30.745: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-57cc1d79-4408-424e-b372-1267f9c30388" in namespace "projected-946" to be "success or failure" May 10 21:25:30.750: INFO: Pod "pod-projected-configmaps-57cc1d79-4408-424e-b372-1267f9c30388": Phase="Pending", Reason="", readiness=false. Elapsed: 4.528606ms May 10 21:25:32.753: INFO: Pod "pod-projected-configmaps-57cc1d79-4408-424e-b372-1267f9c30388": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008201944s May 10 21:25:34.757: INFO: Pod "pod-projected-configmaps-57cc1d79-4408-424e-b372-1267f9c30388": Phase="Running", Reason="", readiness=true. Elapsed: 4.012475678s May 10 21:25:36.762: INFO: Pod "pod-projected-configmaps-57cc1d79-4408-424e-b372-1267f9c30388": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.016496007s STEP: Saw pod success May 10 21:25:36.762: INFO: Pod "pod-projected-configmaps-57cc1d79-4408-424e-b372-1267f9c30388" satisfied condition "success or failure" May 10 21:25:36.765: INFO: Trying to get logs from node jerma-worker pod pod-projected-configmaps-57cc1d79-4408-424e-b372-1267f9c30388 container projected-configmap-volume-test: STEP: delete the pod May 10 21:25:36.793: INFO: Waiting for pod pod-projected-configmaps-57cc1d79-4408-424e-b372-1267f9c30388 to disappear May 10 21:25:36.804: INFO: Pod pod-projected-configmaps-57cc1d79-4408-424e-b372-1267f9c30388 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 10 21:25:36.804: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-946" for this suite. • [SLOW TEST:6.160 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":278,"completed":56,"skipped":963,"failed":0} SSSSSSSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 10 21:25:36.831: INFO: >>> kubeConfig: /root/.kube/config 
STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64 STEP: create the container to handle the HTTPGet hook request. [It] should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the pod with lifecycle hook STEP: check poststart hook STEP: delete the pod with lifecycle hook May 10 21:25:44.928: INFO: Waiting for pod pod-with-poststart-http-hook to disappear May 10 21:25:44.963: INFO: Pod pod-with-poststart-http-hook still exists May 10 21:25:46.963: INFO: Waiting for pod pod-with-poststart-http-hook to disappear May 10 21:25:47.023: INFO: Pod pod-with-poststart-http-hook still exists May 10 21:25:48.963: INFO: Waiting for pod pod-with-poststart-http-hook to disappear May 10 21:25:48.967: INFO: Pod pod-with-poststart-http-hook still exists May 10 21:25:50.963: INFO: Waiting for pod pod-with-poststart-http-hook to disappear May 10 21:25:50.975: INFO: Pod pod-with-poststart-http-hook still exists May 10 21:25:52.963: INFO: Waiting for pod pod-with-poststart-http-hook to disappear May 10 21:25:52.990: INFO: Pod pod-with-poststart-http-hook still exists May 10 21:25:54.963: INFO: Waiting for pod pod-with-poststart-http-hook to disappear May 10 21:25:54.967: INFO: Pod pod-with-poststart-http-hook still exists May 10 21:25:56.963: INFO: Waiting for pod pod-with-poststart-http-hook to disappear May 10 21:25:56.968: INFO: Pod pod-with-poststart-http-hook still exists May 10 21:25:58.963: INFO: Waiting for pod pod-with-poststart-http-hook to disappear May 10 21:25:58.969: INFO: Pod pod-with-poststart-http-hook still exists May 10 21:26:00.963: INFO: Waiting for pod pod-with-poststart-http-hook to disappear 
May 10 21:26:00.967: INFO: Pod pod-with-poststart-http-hook no longer exists [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 10 21:26:00.967: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-977" for this suite. • [SLOW TEST:24.144 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance]","total":278,"completed":57,"skipped":974,"failed":0} SSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 10 21:26:00.975: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read 
extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 10 21:26:01.768: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 10 21:26:03.778: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724742761, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724742761, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724742761, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724742761, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 10 21:26:06.834: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] listing mutating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Listing all of the created validation webhooks STEP: Creating a configMap that should be mutated STEP: Deleting the collection of validation webhooks STEP: Creating a configMap that should not be mutated [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 10 21:26:07.556: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: 
Destroying namespace "webhook-9298" for this suite. STEP: Destroying namespace "webhook-9298-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:6.646 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 listing mutating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","total":278,"completed":58,"skipped":978,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 10 21:26:07.621: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 10 21:26:08.530: INFO: deployment "sample-webhook-deployment" doesn't have 
the required revision set May 10 21:26:10.542: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724742768, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724742768, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724742768, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724742768, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 10 21:26:13.595: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Registering a validating webhook on ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects, via the AdmissionRegistration API STEP: Registering a mutating webhook on ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects, via the AdmissionRegistration API STEP: Creating a dummy validating-webhook-configuration object STEP: Deleting the validating-webhook-configuration, which should be possible to remove STEP: Creating a dummy mutating-webhook-configuration object STEP: Deleting the mutating-webhook-configuration, which should be possible to remove [AfterEach] 
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 10 21:26:13.736: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-5327" for this suite. STEP: Destroying namespace "webhook-5327-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:6.256 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should not be able to mutate or prevent deletion of webhook configuration objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","total":278,"completed":59,"skipped":1001,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 10 21:26:13.878: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for multiple CRDs of different groups [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: CRs in different groups (two CRDs) show up in OpenAPI documentation May 10 21:26:13.945: INFO: >>> kubeConfig: /root/.kube/config May 10 21:26:15.993: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 10 21:26:26.523: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-5650" for this suite. • [SLOW TEST:12.650 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for multiple CRDs of different groups [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance]","total":278,"completed":60,"skipped":1033,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 10 21:26:26.528: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Running these commands on wheezy: for i in 
`seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-9275.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-9275.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers May 10 21:26:32.675: INFO: DNS probes using dns-9275/dns-test-9a23e030-fe9e-4024-a95f-1ceb86181601 succeeded STEP: deleting the pod [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 10 21:26:32.749: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-9275" for this suite. • [SLOW TEST:6.241 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for the cluster [Conformance]","total":278,"completed":61,"skipped":1057,"failed":0} SS ------------------------------ [k8s.io] Pods should be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 10 21:26:32.769: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177 [It] should be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: updating the pod May 10 21:26:37.525: INFO: Successfully updated pod "pod-update-097bb51f-c792-4328-8c58-99e4bec44cfd" STEP: verifying the updated pod is in kubernetes May 10 21:26:37.550: INFO: Pod update OK [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 10 21:26:37.550: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-2774" for this suite. •{"msg":"PASSED [k8s.io] Pods should be updated [NodeConformance] [Conformance]","total":278,"completed":62,"skipped":1059,"failed":0} SSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 10 21:26:37.562: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin May 10 21:26:37.647: INFO: Waiting up to 
5m0s for pod "downwardapi-volume-edea605f-ce2d-4582-9dd2-79c8ff027b01" in namespace "projected-5284" to be "success or failure" May 10 21:26:37.651: INFO: Pod "downwardapi-volume-edea605f-ce2d-4582-9dd2-79c8ff027b01": Phase="Pending", Reason="", readiness=false. Elapsed: 3.45777ms May 10 21:26:39.694: INFO: Pod "downwardapi-volume-edea605f-ce2d-4582-9dd2-79c8ff027b01": Phase="Pending", Reason="", readiness=false. Elapsed: 2.046576747s May 10 21:26:41.712: INFO: Pod "downwardapi-volume-edea605f-ce2d-4582-9dd2-79c8ff027b01": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.0642992s STEP: Saw pod success May 10 21:26:41.712: INFO: Pod "downwardapi-volume-edea605f-ce2d-4582-9dd2-79c8ff027b01" satisfied condition "success or failure" May 10 21:26:41.714: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-edea605f-ce2d-4582-9dd2-79c8ff027b01 container client-container: STEP: delete the pod May 10 21:26:41.730: INFO: Waiting for pod downwardapi-volume-edea605f-ce2d-4582-9dd2-79c8ff027b01 to disappear May 10 21:26:41.734: INFO: Pod downwardapi-volume-edea605f-ce2d-4582-9dd2-79c8ff027b01 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 10 21:26:41.735: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5284" for this suite. 
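The "success or failure" wait in the log above polls the pod phase roughly every 2 seconds (up to the 5m0s timeout) until it reaches a terminal phase. A minimal sketch of that loop, with the API lookup stubbed by a fixed phase sequence so it runs without a cluster:

```shell
polls=0
# Stubbed phase sequence standing in for successive GETs of pod.status.phase.
for phase in Pending Pending Succeeded; do
  polls=$((polls + 1))
  echo "poll $polls: Phase=$phase"
  # Stop on either terminal phase, matching the "success or failure" condition.
  case "$phase" in Succeeded|Failed) break ;; esac
  # The real framework sleeps ~2s here before re-fetching the pod.
done
echo "reached terminal phase after $polls polls"
```

This mirrors the three log entries above: two Pending polls, then Succeeded at roughly the 4-second mark.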
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance]","total":278,"completed":63,"skipped":1063,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl run job should create a job from an image when restart is OnFailure [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 10 21:26:41.743: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [BeforeEach] Kubectl run job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1681 [It] should create a job from an image when restart is OnFailure [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: running the image docker.io/library/httpd:2.4.38-alpine May 10 21:26:41.846: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-job --restart=OnFailure --generator=job/v1 --image=docker.io/library/httpd:2.4.38-alpine --namespace=kubectl-1328' May 10 21:26:45.433: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. 
Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" May 10 21:26:45.433: INFO: stdout: "job.batch/e2e-test-httpd-job created\n" STEP: verifying the job e2e-test-httpd-job was created [AfterEach] Kubectl run job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1686 May 10 21:26:45.436: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete jobs e2e-test-httpd-job --namespace=kubectl-1328' May 10 21:26:45.619: INFO: stderr: "" May 10 21:26:45.619: INFO: stdout: "job.batch \"e2e-test-httpd-job\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 10 21:26:45.619: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-1328" for this suite. •{"msg":"PASSED [sig-cli] Kubectl client Kubectl run job should create a job from an image when restart is OnFailure [Conformance]","total":278,"completed":64,"skipped":1075,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 10 21:26:45.625: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to start watching from a specific resource version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating a new configmap STEP: modifying the configmap once STEP: modifying the configmap a second time STEP: 
deleting the configmap STEP: creating a watch on configmaps from the resource version returned by the first update STEP: Expecting to observe notifications for all changes to the configmap after the first update May 10 21:26:45.760: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version watch-5002 /api/v1/namespaces/watch-5002/configmaps/e2e-watch-test-resource-version 2159ebd3-8891-4475-a28f-7a205cd465fa 15062724 0 2020-05-10 21:26:45 +0000 UTC map[watch-this-configmap:from-resource-version] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} May 10 21:26:45.760: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version watch-5002 /api/v1/namespaces/watch-5002/configmaps/e2e-watch-test-resource-version 2159ebd3-8891-4475-a28f-7a205cd465fa 15062725 0 2020-05-10 21:26:45 +0000 UTC map[watch-this-configmap:from-resource-version] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 10 21:26:45.760: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-5002" for this suite. 
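The watch test above starts its watch at the resourceVersion returned by the first update, so only the later MODIFIED and DELETED events are delivered. A sketch of that filtering; the event list and the start RV 15062723 are hypothetical stand-ins (the log only shows the two delivered events, with RVs 15062724 and 15062725):

```shell
# Hypothetical RV of the first update, from which the watch starts.
start_rv=15062723
delivered=""
# Stubbed event history of the configmap: "resourceVersion:eventType".
for entry in "15062722:ADDED" "15062723:MODIFIED" "15062724:MODIFIED" "15062725:DELETED"; do
  rv=${entry%%:*}
  event=${entry#*:}
  # Only events strictly after the supplied RV reach the watcher.
  if [ "$rv" -gt "$start_rv" ]; then
    echo "Got: $event ($rv)"
    delivered="$delivered $event"
  fi
done
```

The two `Got : MODIFIED` / `Got : DELETED` lines in the log correspond to the two events this filter lets through.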
•{"msg":"PASSED [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance]","total":278,"completed":65,"skipped":1120,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should be able to create a functioning NodePort service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 10 21:26:45.768: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139 [It] should be able to create a functioning NodePort service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating service nodeport-test with type=NodePort in namespace services-8279 STEP: creating replication controller nodeport-test in namespace services-8279 I0510 21:26:45.907832 6 runners.go:189] Created replication controller with name: nodeport-test, namespace: services-8279, replica count: 2 I0510 21:26:48.958286 6 runners.go:189] nodeport-test Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0510 21:26:51.958742 6 runners.go:189] nodeport-test Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady May 10 21:26:51.958: INFO: Creating new exec pod May 10 21:26:57.080: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-8279 execpod2fw5l -- /bin/sh -x -c nc -zv -t -w 2 nodeport-test 80' May 10 
21:26:57.326: INFO: stderr: "I0510 21:26:57.221789 473 log.go:172] (0xc000ad0420) (0xc00044dd60) Create stream\nI0510 21:26:57.222061 473 log.go:172] (0xc000ad0420) (0xc00044dd60) Stream added, broadcasting: 1\nI0510 21:26:57.226356 473 log.go:172] (0xc000ad0420) Reply frame received for 1\nI0510 21:26:57.226405 473 log.go:172] (0xc000ad0420) (0xc000982000) Create stream\nI0510 21:26:57.226424 473 log.go:172] (0xc000ad0420) (0xc000982000) Stream added, broadcasting: 3\nI0510 21:26:57.227132 473 log.go:172] (0xc000ad0420) Reply frame received for 3\nI0510 21:26:57.227170 473 log.go:172] (0xc000ad0420) (0xc00044de00) Create stream\nI0510 21:26:57.227182 473 log.go:172] (0xc000ad0420) (0xc00044de00) Stream added, broadcasting: 5\nI0510 21:26:57.227800 473 log.go:172] (0xc000ad0420) Reply frame received for 5\nI0510 21:26:57.317805 473 log.go:172] (0xc000ad0420) Data frame received for 5\nI0510 21:26:57.317828 473 log.go:172] (0xc00044de00) (5) Data frame handling\nI0510 21:26:57.317840 473 log.go:172] (0xc00044de00) (5) Data frame sent\nI0510 21:26:57.317845 473 log.go:172] (0xc000ad0420) Data frame received for 5\nI0510 21:26:57.317849 473 log.go:172] (0xc00044de00) (5) Data frame handling\n+ nc -zv -t -w 2 nodeport-test 80\nConnection to nodeport-test 80 port [tcp/http] succeeded!\nI0510 21:26:57.317866 473 log.go:172] (0xc00044de00) (5) Data frame sent\nI0510 21:26:57.318180 473 log.go:172] (0xc000ad0420) Data frame received for 3\nI0510 21:26:57.318215 473 log.go:172] (0xc000982000) (3) Data frame handling\nI0510 21:26:57.318439 473 log.go:172] (0xc000ad0420) Data frame received for 5\nI0510 21:26:57.318464 473 log.go:172] (0xc00044de00) (5) Data frame handling\nI0510 21:26:57.320193 473 log.go:172] (0xc000ad0420) Data frame received for 1\nI0510 21:26:57.320209 473 log.go:172] (0xc00044dd60) (1) Data frame handling\nI0510 21:26:57.320229 473 log.go:172] (0xc00044dd60) (1) Data frame sent\nI0510 21:26:57.320241 473 log.go:172] (0xc000ad0420) (0xc00044dd60) Stream 
removed, broadcasting: 1\nI0510 21:26:57.320744 473 log.go:172] (0xc000ad0420) Go away received\nI0510 21:26:57.321001 473 log.go:172] (0xc000ad0420) (0xc00044dd60) Stream removed, broadcasting: 1\nI0510 21:26:57.321021 473 log.go:172] (0xc000ad0420) (0xc000982000) Stream removed, broadcasting: 3\nI0510 21:26:57.321040 473 log.go:172] (0xc000ad0420) (0xc00044de00) Stream removed, broadcasting: 5\n" May 10 21:26:57.326: INFO: stdout: "" May 10 21:26:57.327: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-8279 execpod2fw5l -- /bin/sh -x -c nc -zv -t -w 2 10.106.112.246 80' May 10 21:26:57.509: INFO: stderr: "I0510 21:26:57.450116 494 log.go:172] (0xc00010b340) (0xc0005461e0) Create stream\nI0510 21:26:57.450176 494 log.go:172] (0xc00010b340) (0xc0005461e0) Stream added, broadcasting: 1\nI0510 21:26:57.452482 494 log.go:172] (0xc00010b340) Reply frame received for 1\nI0510 21:26:57.452519 494 log.go:172] (0xc00010b340) (0xc0002375e0) Create stream\nI0510 21:26:57.452532 494 log.go:172] (0xc00010b340) (0xc0002375e0) Stream added, broadcasting: 3\nI0510 21:26:57.453446 494 log.go:172] (0xc00010b340) Reply frame received for 3\nI0510 21:26:57.453473 494 log.go:172] (0xc00010b340) (0xc000546280) Create stream\nI0510 21:26:57.453481 494 log.go:172] (0xc00010b340) (0xc000546280) Stream added, broadcasting: 5\nI0510 21:26:57.454136 494 log.go:172] (0xc00010b340) Reply frame received for 5\nI0510 21:26:57.504732 494 log.go:172] (0xc00010b340) Data frame received for 3\nI0510 21:26:57.504758 494 log.go:172] (0xc0002375e0) (3) Data frame handling\nI0510 21:26:57.504777 494 log.go:172] (0xc00010b340) Data frame received for 5\nI0510 21:26:57.504786 494 log.go:172] (0xc000546280) (5) Data frame handling\nI0510 21:26:57.504795 494 log.go:172] (0xc000546280) (5) Data frame sent\nI0510 21:26:57.504803 494 log.go:172] (0xc00010b340) Data frame received for 5\nI0510 21:26:57.504808 494 log.go:172] (0xc000546280) (5) Data frame 
handling\n+ nc -zv -t -w 2 10.106.112.246 80\nConnection to 10.106.112.246 80 port [tcp/http] succeeded!\nI0510 21:26:57.506021 494 log.go:172] (0xc00010b340) Data frame received for 1\nI0510 21:26:57.506037 494 log.go:172] (0xc0005461e0) (1) Data frame handling\nI0510 21:26:57.506047 494 log.go:172] (0xc0005461e0) (1) Data frame sent\nI0510 21:26:57.506056 494 log.go:172] (0xc00010b340) (0xc0005461e0) Stream removed, broadcasting: 1\nI0510 21:26:57.506067 494 log.go:172] (0xc00010b340) Go away received\nI0510 21:26:57.506343 494 log.go:172] (0xc00010b340) (0xc0005461e0) Stream removed, broadcasting: 1\nI0510 21:26:57.506356 494 log.go:172] (0xc00010b340) (0xc0002375e0) Stream removed, broadcasting: 3\nI0510 21:26:57.506364 494 log.go:172] (0xc00010b340) (0xc000546280) Stream removed, broadcasting: 5\n" May 10 21:26:57.509: INFO: stdout: "" May 10 21:26:57.510: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-8279 execpod2fw5l -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.10 30061' May 10 21:26:57.711: INFO: stderr: "I0510 21:26:57.629856 514 log.go:172] (0xc000a6f4a0) (0xc00098e6e0) Create stream\nI0510 21:26:57.629919 514 log.go:172] (0xc000a6f4a0) (0xc00098e6e0) Stream added, broadcasting: 1\nI0510 21:26:57.634463 514 log.go:172] (0xc000a6f4a0) Reply frame received for 1\nI0510 21:26:57.634533 514 log.go:172] (0xc000a6f4a0) (0xc0006b26e0) Create stream\nI0510 21:26:57.634571 514 log.go:172] (0xc000a6f4a0) (0xc0006b26e0) Stream added, broadcasting: 3\nI0510 21:26:57.635708 514 log.go:172] (0xc000a6f4a0) Reply frame received for 3\nI0510 21:26:57.635770 514 log.go:172] (0xc000a6f4a0) (0xc0004314a0) Create stream\nI0510 21:26:57.635795 514 log.go:172] (0xc000a6f4a0) (0xc0004314a0) Stream added, broadcasting: 5\nI0510 21:26:57.636800 514 log.go:172] (0xc000a6f4a0) Reply frame received for 5\nI0510 21:26:57.703313 514 log.go:172] (0xc000a6f4a0) Data frame received for 3\nI0510 21:26:57.703341 514 log.go:172] (0xc0006b26e0) 
(3) Data frame handling\nI0510 21:26:57.703561 514 log.go:172] (0xc000a6f4a0) Data frame received for 5\nI0510 21:26:57.703583 514 log.go:172] (0xc0004314a0) (5) Data frame handling\nI0510 21:26:57.703597 514 log.go:172] (0xc0004314a0) (5) Data frame sent\nI0510 21:26:57.703603 514 log.go:172] (0xc000a6f4a0) Data frame received for 5\nI0510 21:26:57.703608 514 log.go:172] (0xc0004314a0) (5) Data frame handling\n+ nc -zv -t -w 2 172.17.0.10 30061\nConnection to 172.17.0.10 30061 port [tcp/30061] succeeded!\nI0510 21:26:57.704955 514 log.go:172] (0xc000a6f4a0) Data frame received for 1\nI0510 21:26:57.704981 514 log.go:172] (0xc00098e6e0) (1) Data frame handling\nI0510 21:26:57.705010 514 log.go:172] (0xc00098e6e0) (1) Data frame sent\nI0510 21:26:57.705030 514 log.go:172] (0xc000a6f4a0) (0xc00098e6e0) Stream removed, broadcasting: 1\nI0510 21:26:57.705052 514 log.go:172] (0xc000a6f4a0) Go away received\nI0510 21:26:57.705730 514 log.go:172] (0xc000a6f4a0) (0xc00098e6e0) Stream removed, broadcasting: 1\nI0510 21:26:57.705763 514 log.go:172] (0xc000a6f4a0) (0xc0006b26e0) Stream removed, broadcasting: 3\nI0510 21:26:57.705781 514 log.go:172] (0xc000a6f4a0) (0xc0004314a0) Stream removed, broadcasting: 5\n" May 10 21:26:57.711: INFO: stdout: "" May 10 21:26:57.711: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-8279 execpod2fw5l -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.8 30061' May 10 21:26:57.945: INFO: stderr: "I0510 21:26:57.859880 534 log.go:172] (0xc000614d10) (0xc0006ba8c0) Create stream\nI0510 21:26:57.859944 534 log.go:172] (0xc000614d10) (0xc0006ba8c0) Stream added, broadcasting: 1\nI0510 21:26:57.862678 534 log.go:172] (0xc000614d10) Reply frame received for 1\nI0510 21:26:57.862722 534 log.go:172] (0xc000614d10) (0xc0004c3680) Create stream\nI0510 21:26:57.862737 534 log.go:172] (0xc000614d10) (0xc0004c3680) Stream added, broadcasting: 3\nI0510 21:26:57.863538 534 log.go:172] (0xc000614d10) Reply frame received 
for 3\nI0510 21:26:57.863564 534 log.go:172] (0xc000614d10) (0xc0009e4000) Create stream\nI0510 21:26:57.863572 534 log.go:172] (0xc000614d10) (0xc0009e4000) Stream added, broadcasting: 5\nI0510 21:26:57.864454 534 log.go:172] (0xc000614d10) Reply frame received for 5\nI0510 21:26:57.937784 534 log.go:172] (0xc000614d10) Data frame received for 5\nI0510 21:26:57.937841 534 log.go:172] (0xc0009e4000) (5) Data frame handling\nI0510 21:26:57.937868 534 log.go:172] (0xc0009e4000) (5) Data frame sent\nI0510 21:26:57.937885 534 log.go:172] (0xc000614d10) Data frame received for 5\nI0510 21:26:57.937899 534 log.go:172] (0xc0009e4000) (5) Data frame handling\n+ nc -zv -t -w 2 172.17.0.8 30061\nConnection to 172.17.0.8 30061 port [tcp/30061] succeeded!\nI0510 21:26:57.937958 534 log.go:172] (0xc000614d10) Data frame received for 3\nI0510 21:26:57.938003 534 log.go:172] (0xc0004c3680) (3) Data frame handling\nI0510 21:26:57.939318 534 log.go:172] (0xc000614d10) Data frame received for 1\nI0510 21:26:57.939350 534 log.go:172] (0xc0006ba8c0) (1) Data frame handling\nI0510 21:26:57.939379 534 log.go:172] (0xc0006ba8c0) (1) Data frame sent\nI0510 21:26:57.939402 534 log.go:172] (0xc000614d10) (0xc0006ba8c0) Stream removed, broadcasting: 1\nI0510 21:26:57.939427 534 log.go:172] (0xc000614d10) Go away received\nI0510 21:26:57.939775 534 log.go:172] (0xc000614d10) (0xc0006ba8c0) Stream removed, broadcasting: 1\nI0510 21:26:57.939803 534 log.go:172] (0xc000614d10) (0xc0004c3680) Stream removed, broadcasting: 3\nI0510 21:26:57.939819 534 log.go:172] (0xc000614d10) (0xc0009e4000) Stream removed, broadcasting: 5\n" May 10 21:26:57.945: INFO: stdout: "" [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 10 21:26:57.945: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-8279" for this suite. 
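The NodePort check above execs `nc -zv -t -w 2 <node-ip> 30061` from a helper pod against each node address (172.17.0.8 and 172.17.0.10 in this run). A minimal sketch of the kind of NodePort Service being probed; the name, selector, and port numbers other than 30061 are illustrative assumptions, not taken from the test source:

```yaml
# Hypothetical NodePort Service approximating what the e2e test creates.
apiVersion: v1
kind: Service
metadata:
  name: nodeport-test          # assumed name; the test generates its own
  namespace: services-8279
spec:
  type: NodePort
  selector:
    app: nodeport-test
  ports:
  - port: 80
    targetPort: 80
    nodePort: 30061            # the node port probed by `nc -zv` in the log
```

The test passes when the TCP connect to `<node-ip>:30061` succeeds on every node, which is what the `Connection to ... succeeded!` lines in the captured stderr show.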
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143 • [SLOW TEST:12.186 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to create a functioning NodePort service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to create a functioning NodePort service [Conformance]","total":278,"completed":66,"skipped":1151,"failed":0} SSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 10 21:26:57.954: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin May 10 21:26:58.062: INFO: Waiting up to 5m0s for pod "downwardapi-volume-c6aa9e18-e617-4255-a924-481806b16610" in namespace "projected-3244" to be "success or failure" May 10 21:26:58.076: INFO: Pod "downwardapi-volume-c6aa9e18-e617-4255-a924-481806b16610": 
Phase="Pending", Reason="", readiness=false. Elapsed: 14.002445ms May 10 21:27:00.080: INFO: Pod "downwardapi-volume-c6aa9e18-e617-4255-a924-481806b16610": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018079053s May 10 21:27:02.084: INFO: Pod "downwardapi-volume-c6aa9e18-e617-4255-a924-481806b16610": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.022725427s STEP: Saw pod success May 10 21:27:02.085: INFO: Pod "downwardapi-volume-c6aa9e18-e617-4255-a924-481806b16610" satisfied condition "success or failure" May 10 21:27:02.088: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-c6aa9e18-e617-4255-a924-481806b16610 container client-container: STEP: delete the pod May 10 21:27:02.136: INFO: Waiting for pod downwardapi-volume-c6aa9e18-e617-4255-a924-481806b16610 to disappear May 10 21:27:02.148: INFO: Pod downwardapi-volume-c6aa9e18-e617-4255-a924-481806b16610 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 10 21:27:02.148: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3244" for this suite. 
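The downward API test above mounts a volume exposing the container's cpu limit; because the container sets no limit, the kubelet publishes node allocatable cpu instead. A hedged sketch of such a pod, assuming illustrative names and the agnhost image seen elsewhere in this log:

```yaml
# Sketch only: a downward API volume item backed by resourceFieldRef.
# With no cpu limit on the container, limits.cpu defaults to node allocatable.
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-example   # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8   # image from this log
    command: ["cat", "/etc/podinfo/cpu_limit"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: cpu_limit
        resourceFieldRef:
          containerName: client-container
          resource: limits.cpu
```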
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":278,"completed":67,"skipped":1157,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 10 21:27:02.156: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating projection with secret that has name projected-secret-test-map-01f45856-c42a-4792-872c-c66be8db53fe STEP: Creating a pod to test consume secrets May 10 21:27:02.259: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-591aeb9c-de44-44d3-9129-c61900620336" in namespace "projected-4856" to be "success or failure" May 10 21:27:02.263: INFO: Pod "pod-projected-secrets-591aeb9c-de44-44d3-9129-c61900620336": Phase="Pending", Reason="", readiness=false. Elapsed: 3.987625ms May 10 21:27:04.491: INFO: Pod "pod-projected-secrets-591aeb9c-de44-44d3-9129-c61900620336": Phase="Pending", Reason="", readiness=false. Elapsed: 2.232549339s May 10 21:27:06.496: INFO: Pod "pod-projected-secrets-591aeb9c-de44-44d3-9129-c61900620336": Phase="Pending", Reason="", readiness=false. Elapsed: 4.237625729s May 10 21:27:08.501: INFO: Pod "pod-projected-secrets-591aeb9c-de44-44d3-9129-c61900620336": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.242363446s STEP: Saw pod success May 10 21:27:08.501: INFO: Pod "pod-projected-secrets-591aeb9c-de44-44d3-9129-c61900620336" satisfied condition "success or failure" May 10 21:27:08.504: INFO: Trying to get logs from node jerma-worker2 pod pod-projected-secrets-591aeb9c-de44-44d3-9129-c61900620336 container projected-secret-volume-test: STEP: delete the pod May 10 21:27:08.563: INFO: Waiting for pod pod-projected-secrets-591aeb9c-de44-44d3-9129-c61900620336 to disappear May 10 21:27:08.568: INFO: Pod pod-projected-secrets-591aeb9c-de44-44d3-9129-c61900620336 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 10 21:27:08.569: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-4856" for this suite. • [SLOW TEST:6.423 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":278,"completed":68,"skipped":1174,"failed":0} S ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a pod. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 10 21:27:08.579: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a pod. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a Pod that fits quota STEP: Ensuring ResourceQuota status captures the pod usage STEP: Not allowing a pod to be created that exceeds remaining quota STEP: Not allowing a pod to be created that exceeds remaining quota(validation on extended resources) STEP: Ensuring a pod cannot update its resource requirements STEP: Ensuring attempts to update pod resource requirements did not change quota usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 10 21:27:21.788: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-871" for this suite. • [SLOW TEST:13.219 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a pod. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a pod. [Conformance]","total":278,"completed":69,"skipped":1175,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl describe should check if kubectl describe prints relevant information for rc and pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 10 21:27:21.798: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [It] should check if kubectl describe prints relevant information for rc and pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 10 21:27:21.864: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-7867' May 10 21:27:22.194: INFO: stderr: "" May 10 21:27:22.194: INFO: stdout: "replicationcontroller/agnhost-master created\n" May 10 21:27:22.194: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-7867' May 10 21:27:22.484: INFO: stderr: "" May 10 21:27:22.484: INFO: stdout: "service/agnhost-master created\n" STEP: Waiting for Agnhost master to start. 
May 10 21:27:23.509: INFO: Selector matched 1 pods for map[app:agnhost] May 10 21:27:23.509: INFO: Found 0 / 1 May 10 21:27:24.489: INFO: Selector matched 1 pods for map[app:agnhost] May 10 21:27:24.489: INFO: Found 0 / 1 May 10 21:27:25.488: INFO: Selector matched 1 pods for map[app:agnhost] May 10 21:27:25.488: INFO: Found 0 / 1 May 10 21:27:26.489: INFO: Selector matched 1 pods for map[app:agnhost] May 10 21:27:26.489: INFO: Found 1 / 1 May 10 21:27:26.489: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 May 10 21:27:26.493: INFO: Selector matched 1 pods for map[app:agnhost] May 10 21:27:26.493: INFO: ForEach: Found 1 pods from the filter. Now looping through them. May 10 21:27:26.493: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe pod agnhost-master-pnwrq --namespace=kubectl-7867' May 10 21:27:26.604: INFO: stderr: "" May 10 21:27:26.604: INFO: stdout: "Name: agnhost-master-pnwrq\nNamespace: kubectl-7867\nPriority: 0\nNode: jerma-worker2/172.17.0.8\nStart Time: Sun, 10 May 2020 21:27:22 +0000\nLabels: app=agnhost\n role=master\nAnnotations: \nStatus: Running\nIP: 10.244.2.136\nIPs:\n IP: 10.244.2.136\nControlled By: ReplicationController/agnhost-master\nContainers:\n agnhost-master:\n Container ID: containerd://265a8eb2f20dcc45d9f5ab2a8c277936dea5a456036bf321ad497b62f982d96e\n Image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8\n Image ID: gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5\n Port: 6379/TCP\n Host Port: 0/TCP\n State: Running\n Started: Sun, 10 May 2020 21:27:25 +0000\n Ready: True\n Restart Count: 0\n Environment: \n Mounts:\n /var/run/secrets/kubernetes.io/serviceaccount from default-token-48qxc (ro)\nConditions:\n Type Status\n Initialized True \n Ready True \n ContainersReady True \n PodScheduled True \nVolumes:\n default-token-48qxc:\n Type: Secret (a volume populated by a Secret)\n SecretName: default-token-48qxc\n Optional: 
false\nQoS Class: BestEffort\nNode-Selectors: \nTolerations: node.kubernetes.io/not-ready:NoExecute for 300s\n node.kubernetes.io/unreachable:NoExecute for 300s\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal Scheduled 4s default-scheduler Successfully assigned kubectl-7867/agnhost-master-pnwrq to jerma-worker2\n Normal Pulled 3s kubelet, jerma-worker2 Container image \"gcr.io/kubernetes-e2e-test-images/agnhost:2.8\" already present on machine\n Normal Created 2s kubelet, jerma-worker2 Created container agnhost-master\n Normal Started 1s kubelet, jerma-worker2 Started container agnhost-master\n" May 10 21:27:26.604: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe rc agnhost-master --namespace=kubectl-7867' May 10 21:27:26.738: INFO: stderr: "" May 10 21:27:26.739: INFO: stdout: "Name: agnhost-master\nNamespace: kubectl-7867\nSelector: app=agnhost,role=master\nLabels: app=agnhost\n role=master\nAnnotations: \nReplicas: 1 current / 1 desired\nPods Status: 1 Running / 0 Waiting / 0 Succeeded / 0 Failed\nPod Template:\n Labels: app=agnhost\n role=master\n Containers:\n agnhost-master:\n Image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8\n Port: 6379/TCP\n Host Port: 0/TCP\n Environment: \n Mounts: \n Volumes: \nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal SuccessfulCreate 4s replication-controller Created pod: agnhost-master-pnwrq\n" May 10 21:27:26.739: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe service agnhost-master --namespace=kubectl-7867' May 10 21:27:26.849: INFO: stderr: "" May 10 21:27:26.849: INFO: stdout: "Name: agnhost-master\nNamespace: kubectl-7867\nLabels: app=agnhost\n role=master\nAnnotations: \nSelector: app=agnhost,role=master\nType: ClusterIP\nIP: 10.108.105.86\nPort: 6379/TCP\nTargetPort: agnhost-server/TCP\nEndpoints: 10.244.2.136:6379\nSession Affinity: None\nEvents: \n" May 10 21:27:26.854: INFO: Running 
'/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe node jerma-control-plane' May 10 21:27:26.983: INFO: stderr: "" May 10 21:27:26.983: INFO: stdout: "Name: jerma-control-plane\nRoles: master\nLabels: beta.kubernetes.io/arch=amd64\n beta.kubernetes.io/os=linux\n kubernetes.io/arch=amd64\n kubernetes.io/hostname=jerma-control-plane\n kubernetes.io/os=linux\n node-role.kubernetes.io/master=\nAnnotations: kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock\n node.alpha.kubernetes.io/ttl: 0\n volumes.kubernetes.io/controller-managed-attach-detach: true\nCreationTimestamp: Sun, 15 Mar 2020 18:25:55 +0000\nTaints: node-role.kubernetes.io/master:NoSchedule\nUnschedulable: false\nLease:\n HolderIdentity: jerma-control-plane\n AcquireTime: \n RenewTime: Sun, 10 May 2020 21:27:20 +0000\nConditions:\n Type Status LastHeartbeatTime LastTransitionTime Reason Message\n ---- ------ ----------------- ------------------ ------ -------\n MemoryPressure False Sun, 10 May 2020 21:23:14 +0000 Sun, 15 Mar 2020 18:25:55 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available\n DiskPressure False Sun, 10 May 2020 21:23:14 +0000 Sun, 15 Mar 2020 18:25:55 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure\n PIDPressure False Sun, 10 May 2020 21:23:14 +0000 Sun, 15 Mar 2020 18:25:55 +0000 KubeletHasSufficientPID kubelet has sufficient PID available\n Ready True Sun, 10 May 2020 21:23:14 +0000 Sun, 15 Mar 2020 18:26:27 +0000 KubeletReady kubelet is posting ready status\nAddresses:\n InternalIP: 172.17.0.9\n Hostname: jerma-control-plane\nCapacity:\n cpu: 16\n ephemeral-storage: 2303189964Ki\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 131759892Ki\n pods: 110\nAllocatable:\n cpu: 16\n ephemeral-storage: 2303189964Ki\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 131759892Ki\n pods: 110\nSystem Info:\n Machine ID: 3bcfb16fe77247d3af07bed975350d5c\n System UUID: 947a2db5-5527-4203-8af5-13d97ffe8a80\n Boot ID: 
ca2aa731-f890-4956-92a1-ff8c7560d571\n Kernel Version: 4.15.0-88-generic\n OS Image: Ubuntu 19.10\n Operating System: linux\n Architecture: amd64\n Container Runtime Version: containerd://1.3.2-31-gaa877d78\n Kubelet Version: v1.17.2\n Kube-Proxy Version: v1.17.2\nPodCIDR: 10.244.0.0/24\nPodCIDRs: 10.244.0.0/24\nNon-terminated Pods: (9 in total)\n Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits AGE\n --------- ---- ------------ ---------- --------------- ------------- ---\n kube-system coredns-6955765f44-rll5s 100m (0%) 0 (0%) 70Mi (0%) 170Mi (0%) 56d\n kube-system coredns-6955765f44-svxk5 100m (0%) 0 (0%) 70Mi (0%) 170Mi (0%) 56d\n kube-system etcd-jerma-control-plane 0 (0%) 0 (0%) 0 (0%) 0 (0%) 56d\n kube-system kindnet-bjddj 100m (0%) 100m (0%) 50Mi (0%) 50Mi (0%) 56d\n kube-system kube-apiserver-jerma-control-plane 250m (1%) 0 (0%) 0 (0%) 0 (0%) 56d\n kube-system kube-controller-manager-jerma-control-plane 200m (1%) 0 (0%) 0 (0%) 0 (0%) 56d\n kube-system kube-proxy-mm9zd 0 (0%) 0 (0%) 0 (0%) 0 (0%) 56d\n kube-system kube-scheduler-jerma-control-plane 100m (0%) 0 (0%) 0 (0%) 0 (0%) 56d\n local-path-storage local-path-provisioner-85445b74d4-7mg5w 0 (0%) 0 (0%) 0 (0%) 0 (0%) 56d\nAllocated resources:\n (Total limits may be over 100 percent, i.e., overcommitted.)\n Resource Requests Limits\n -------- -------- ------\n cpu 850m (5%) 100m (0%)\n memory 190Mi (0%) 390Mi (0%)\n ephemeral-storage 0 (0%) 0 (0%)\nEvents: \n" May 10 21:27:26.984: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe namespace kubectl-7867' May 10 21:27:27.080: INFO: stderr: "" May 10 21:27:27.080: INFO: stdout: "Name: kubectl-7867\nLabels: e2e-framework=kubectl\n e2e-run=2ee887b1-94aa-4a9e-bb1e-b5a00d2c8458\nAnnotations: \nStatus: Active\n\nNo resource quota.\n\nNo LimitRange resource.\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 10 21:27:27.080: 
INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-7867" for this suite. • [SLOW TEST:5.290 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl describe /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1047 should check if kubectl describe prints relevant information for rc and pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl describe should check if kubectl describe prints relevant information for rc and pods [Conformance]","total":278,"completed":70,"skipped":1192,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicaSet should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 10 21:27:27.089: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replicaset STEP: Waiting for a default service account to be provisioned in namespace [It] should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 10 21:27:27.143: INFO: Creating ReplicaSet my-hostname-basic-4e26b96f-6156-474e-897c-aada30c73d2c May 10 21:27:27.198: INFO: Pod name my-hostname-basic-4e26b96f-6156-474e-897c-aada30c73d2c: Found 0 pods out of 1 May 10 21:27:32.201: INFO: Pod name my-hostname-basic-4e26b96f-6156-474e-897c-aada30c73d2c: Found 1 pods out of 1 May 10 21:27:32.201: INFO: Ensuring a pod for ReplicaSet 
"my-hostname-basic-4e26b96f-6156-474e-897c-aada30c73d2c" is running May 10 21:27:32.203: INFO: Pod "my-hostname-basic-4e26b96f-6156-474e-897c-aada30c73d2c-f4slq" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-10 21:27:27 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-10 21:27:30 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-10 21:27:30 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-10 21:27:27 +0000 UTC Reason: Message:}]) May 10 21:27:32.203: INFO: Trying to dial the pod May 10 21:27:37.214: INFO: Controller my-hostname-basic-4e26b96f-6156-474e-897c-aada30c73d2c: Got expected result from replica 1 [my-hostname-basic-4e26b96f-6156-474e-897c-aada30c73d2c-f4slq]: "my-hostname-basic-4e26b96f-6156-474e-897c-aada30c73d2c-f4slq", 1 of 1 required successes so far [AfterEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 10 21:27:37.214: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replicaset-5720" for this suite. 
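The ReplicaSet test above dials each replica and expects it to return its own pod name. A sketch of an equivalent ReplicaSet; the fixed name (the test appends a UUID), label key, and container args are assumptions:

```yaml
# Illustrative ReplicaSet: each replica serves its hostname, which the test
# compares against the pod name ("my-hostname-basic-...-f4slq" in the log).
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: my-hostname-basic        # test uses a generated UUID suffix
spec:
  replicas: 1
  selector:
    matchLabels:
      name: my-hostname-basic
  template:
    metadata:
      labels:
        name: my-hostname-basic
    spec:
      containers:
      - name: my-hostname-basic
        image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8  # assumed image
        args: ["serve-hostname"]
        ports:
        - containerPort: 9376
```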
• [SLOW TEST:10.134 seconds] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] ReplicaSet should serve a basic image on each replica with a public image [Conformance]","total":278,"completed":71,"skipped":1222,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 10 21:27:37.223: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set May 10 21:27:41.345: INFO: Expected: &{DONE} to match Container's Termination Message: DONE -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 10 
21:27:41.375: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-4270" for this suite. •{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]","total":278,"completed":72,"skipped":1260,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 10 21:27:41.383: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating projection with secret that has name projected-secret-test-map-e242559a-c307-4adb-9fc0-175f25bd4396 STEP: Creating a pod to test consume secrets May 10 21:27:41.452: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-e2302333-8ab7-4126-9152-623f4a078d08" in namespace "projected-9952" to be "success or failure" May 10 21:27:41.461: INFO: Pod "pod-projected-secrets-e2302333-8ab7-4126-9152-623f4a078d08": Phase="Pending", Reason="", readiness=false. Elapsed: 9.024377ms May 10 21:27:43.464: INFO: Pod "pod-projected-secrets-e2302333-8ab7-4126-9152-623f4a078d08": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.012452524s May 10 21:27:45.468: INFO: Pod "pod-projected-secrets-e2302333-8ab7-4126-9152-623f4a078d08": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.016350084s STEP: Saw pod success May 10 21:27:45.468: INFO: Pod "pod-projected-secrets-e2302333-8ab7-4126-9152-623f4a078d08" satisfied condition "success or failure" May 10 21:27:45.471: INFO: Trying to get logs from node jerma-worker pod pod-projected-secrets-e2302333-8ab7-4126-9152-623f4a078d08 container projected-secret-volume-test: STEP: delete the pod May 10 21:27:45.492: INFO: Waiting for pod pod-projected-secrets-e2302333-8ab7-4126-9152-623f4a078d08 to disappear May 10 21:27:45.497: INFO: Pod pod-projected-secrets-e2302333-8ab7-4126-9152-623f4a078d08 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 10 21:27:45.497: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-9952" for this suite. 
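The projected secret test above ("with mappings and Item Mode set") maps a secret key to a new path and sets an explicit file mode on the item. A hedged sketch; the pod name, key/path names, and the 0400 mode are illustrative:

```yaml
# Sketch: projected secret volume with a key-to-path mapping and item mode.
apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-secrets-example   # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: projected-secret-volume-test
    image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8   # assumed image
    command: ["ls", "-l", "/etc/projected-secret-volume"]
    volumeMounts:
    - name: projected-secret-volume
      mountPath: /etc/projected-secret-volume
      readOnly: true
  volumes:
  - name: projected-secret-volume
    projected:
      sources:
      - secret:
          name: projected-secret-test-map   # created by the test beforehand
          items:
          - key: data-1                     # assumed key name
            path: new-path-data-1           # the "mapping" under test
            mode: 0400                      # the "Item Mode set" under test
```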
•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":73,"skipped":1276,"failed":0} SSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 10 21:27:45.505: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:133 [It] should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 10 21:27:45.625: INFO: Creating daemon "daemon-set" with a node selector STEP: Initially, daemon pods should not be running on any nodes. May 10 21:27:45.640: INFO: Number of nodes with available pods: 0 May 10 21:27:45.640: INFO: Number of running nodes: 0, number of available pods: 0 STEP: Change node label to blue, check that daemon pod is launched. 
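The "complex daemon" test that starts here creates a DaemonSet whose pods schedule only on nodes carrying a selector label, then flips the label from blue to green and watches pods get launched and unscheduled. A sketch under stated assumptions; the exact label key the test applies is not visible in this log, so `color: blue` here mirrors the log's wording only:

```yaml
# Hedged sketch of a DaemonSet with a node selector, as exercised below.
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: daemon-set
  namespace: daemonsets-5006
spec:
  selector:
    matchLabels:
      daemonset-name: daemon-set     # assumed label key
  template:
    metadata:
      labels:
        daemonset-name: daemon-set
    spec:
      nodeSelector:
        color: blue                  # changed to green later in the test
      containers:
      - name: app
        image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8  # assumed image
```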
May 10 21:27:45.671: INFO: Number of nodes with available pods: 0 May 10 21:27:45.671: INFO: Node jerma-worker2 is running more than one daemon pod May 10 21:27:46.701: INFO: Number of nodes with available pods: 0 May 10 21:27:46.701: INFO: Node jerma-worker2 is running more than one daemon pod May 10 21:27:47.675: INFO: Number of nodes with available pods: 0 May 10 21:27:47.675: INFO: Node jerma-worker2 is running more than one daemon pod May 10 21:27:48.675: INFO: Number of nodes with available pods: 0 May 10 21:27:48.675: INFO: Node jerma-worker2 is running more than one daemon pod May 10 21:27:49.683: INFO: Number of nodes with available pods: 1 May 10 21:27:49.683: INFO: Number of running nodes: 1, number of available pods: 1 STEP: Update the node label to green, and wait for daemons to be unscheduled May 10 21:27:49.710: INFO: Number of nodes with available pods: 1 May 10 21:27:49.710: INFO: Number of running nodes: 0, number of available pods: 1 May 10 21:27:50.726: INFO: Number of nodes with available pods: 0 May 10 21:27:50.726: INFO: Number of running nodes: 0, number of available pods: 0 STEP: Update DaemonSet node selector to green, and change its update strategy to RollingUpdate May 10 21:27:50.768: INFO: Number of nodes with available pods: 0 May 10 21:27:50.768: INFO: Node jerma-worker2 is running more than one daemon pod May 10 21:27:51.772: INFO: Number of nodes with available pods: 0 May 10 21:27:51.772: INFO: Node jerma-worker2 is running more than one daemon pod May 10 21:27:52.773: INFO: Number of nodes with available pods: 0 May 10 21:27:52.773: INFO: Node jerma-worker2 is running more than one daemon pod May 10 21:27:53.772: INFO: Number of nodes with available pods: 0 May 10 21:27:53.772: INFO: Node jerma-worker2 is running more than one daemon pod May 10 21:27:54.773: INFO: Number of nodes with available pods: 0 May 10 21:27:54.773: INFO: Node jerma-worker2 is running more than one daemon pod May 10 21:27:55.771: INFO: Number of nodes with 
available pods: 0 May 10 21:27:55.772: INFO: Node jerma-worker2 is running more than one daemon pod May 10 21:27:56.772: INFO: Number of nodes with available pods: 0 May 10 21:27:56.772: INFO: Node jerma-worker2 is running more than one daemon pod May 10 21:27:57.774: INFO: Number of nodes with available pods: 0 May 10 21:27:57.774: INFO: Node jerma-worker2 is running more than one daemon pod May 10 21:27:58.773: INFO: Number of nodes with available pods: 0 May 10 21:27:58.773: INFO: Node jerma-worker2 is running more than one daemon pod May 10 21:27:59.772: INFO: Number of nodes with available pods: 0 May 10 21:27:59.772: INFO: Node jerma-worker2 is running more than one daemon pod May 10 21:28:00.772: INFO: Number of nodes with available pods: 0 May 10 21:28:00.772: INFO: Node jerma-worker2 is running more than one daemon pod May 10 21:28:01.772: INFO: Number of nodes with available pods: 0 May 10 21:28:01.772: INFO: Node jerma-worker2 is running more than one daemon pod May 10 21:28:02.772: INFO: Number of nodes with available pods: 0 May 10 21:28:02.772: INFO: Node jerma-worker2 is running more than one daemon pod May 10 21:28:03.772: INFO: Number of nodes with available pods: 1 May 10 21:28:03.772: INFO: Number of running nodes: 1, number of available pods: 1 [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:99 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-5006, will wait for the garbage collector to delete the pods May 10 21:28:03.839: INFO: Deleting DaemonSet.extensions daemon-set took: 7.085378ms May 10 21:28:04.239: INFO: Terminating DaemonSet.extensions daemon-set pods took: 400.24244ms May 10 21:28:09.543: INFO: Number of nodes with available pods: 0 May 10 21:28:09.543: INFO: Number of running nodes: 0, number of available pods: 0 May 10 21:28:09.547: INFO: daemonset: 
{"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-5006/daemonsets","resourceVersion":"15063314"},"items":null} May 10 21:28:09.550: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-5006/pods","resourceVersion":"15063314"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 10 21:28:09.565: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-5006" for this suite. • [SLOW TEST:24.082 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance]","total":278,"completed":74,"skipped":1285,"failed":0} SSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 10 21:28:09.588: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should provide podname only [NodeConformance] [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin May 10 21:28:09.691: INFO: Waiting up to 5m0s for pod "downwardapi-volume-310c31e6-3985-4546-bf92-16ade54f586b" in namespace "projected-9593" to be "success or failure" May 10 21:28:09.695: INFO: Pod "downwardapi-volume-310c31e6-3985-4546-bf92-16ade54f586b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.093956ms May 10 21:28:11.699: INFO: Pod "downwardapi-volume-310c31e6-3985-4546-bf92-16ade54f586b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008072283s May 10 21:28:13.703: INFO: Pod "downwardapi-volume-310c31e6-3985-4546-bf92-16ade54f586b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012521884s STEP: Saw pod success May 10 21:28:13.703: INFO: Pod "downwardapi-volume-310c31e6-3985-4546-bf92-16ade54f586b" satisfied condition "success or failure" May 10 21:28:13.707: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-310c31e6-3985-4546-bf92-16ade54f586b container client-container: STEP: delete the pod May 10 21:28:13.744: INFO: Waiting for pod downwardapi-volume-310c31e6-3985-4546-bf92-16ade54f586b to disappear May 10 21:28:13.755: INFO: Pod downwardapi-volume-310c31e6-3985-4546-bf92-16ade54f586b no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 10 21:28:13.755: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-9593" for this suite. 
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance]","total":278,"completed":75,"skipped":1291,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 10 21:28:13.764: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting up data [It] should support subpaths with secret pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating pod pod-subpath-test-secret-229d STEP: Creating a pod to test atomic-volume-subpath May 10 21:28:13.922: INFO: Waiting up to 5m0s for pod "pod-subpath-test-secret-229d" in namespace "subpath-8702" to be "success or failure" May 10 21:28:13.928: INFO: Pod "pod-subpath-test-secret-229d": Phase="Pending", Reason="", readiness=false. Elapsed: 6.237639ms May 10 21:28:16.097: INFO: Pod "pod-subpath-test-secret-229d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.174979949s May 10 21:28:18.101: INFO: Pod "pod-subpath-test-secret-229d": Phase="Running", Reason="", readiness=true. Elapsed: 4.178329959s May 10 21:28:20.104: INFO: Pod "pod-subpath-test-secret-229d": Phase="Running", Reason="", readiness=true. Elapsed: 6.18228841s May 10 21:28:22.108: INFO: Pod "pod-subpath-test-secret-229d": Phase="Running", Reason="", readiness=true. 
Elapsed: 8.186002254s May 10 21:28:24.112: INFO: Pod "pod-subpath-test-secret-229d": Phase="Running", Reason="", readiness=true. Elapsed: 10.190195658s May 10 21:28:26.117: INFO: Pod "pod-subpath-test-secret-229d": Phase="Running", Reason="", readiness=true. Elapsed: 12.194377925s May 10 21:28:28.121: INFO: Pod "pod-subpath-test-secret-229d": Phase="Running", Reason="", readiness=true. Elapsed: 14.198608365s May 10 21:28:30.125: INFO: Pod "pod-subpath-test-secret-229d": Phase="Running", Reason="", readiness=true. Elapsed: 16.202916375s May 10 21:28:32.130: INFO: Pod "pod-subpath-test-secret-229d": Phase="Running", Reason="", readiness=true. Elapsed: 18.20733441s May 10 21:28:34.134: INFO: Pod "pod-subpath-test-secret-229d": Phase="Running", Reason="", readiness=true. Elapsed: 20.212062747s May 10 21:28:36.139: INFO: Pod "pod-subpath-test-secret-229d": Phase="Running", Reason="", readiness=true. Elapsed: 22.216554188s May 10 21:28:38.144: INFO: Pod "pod-subpath-test-secret-229d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.22139337s STEP: Saw pod success May 10 21:28:38.144: INFO: Pod "pod-subpath-test-secret-229d" satisfied condition "success or failure" May 10 21:28:38.147: INFO: Trying to get logs from node jerma-worker pod pod-subpath-test-secret-229d container test-container-subpath-secret-229d: STEP: delete the pod May 10 21:28:38.171: INFO: Waiting for pod pod-subpath-test-secret-229d to disappear May 10 21:28:38.188: INFO: Pod pod-subpath-test-secret-229d no longer exists STEP: Deleting pod pod-subpath-test-secret-229d May 10 21:28:38.188: INFO: Deleting pod "pod-subpath-test-secret-229d" in namespace "subpath-8702" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 10 21:28:38.190: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-8702" for this suite. 
• [SLOW TEST:24.433 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with secret pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [LinuxOnly] [Conformance]","total":278,"completed":76,"skipped":1308,"failed":0} SS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 10 21:28:38.197: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:64 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:79 STEP: Creating service test in namespace statefulset-2989 [It] should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a new StatefulSet May 10 21:28:38.296: INFO: Found 0 stateful pods, waiting for 3 May 10 21:28:48.301: INFO: Waiting for pod ss2-0 
to enter Running - Ready=true, currently Running - Ready=true May 10 21:28:48.301: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true May 10 21:28:48.301: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=false May 10 21:28:58.300: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true May 10 21:28:58.300: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true May 10 21:28:58.300: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true May 10 21:28:58.311: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2989 ss2-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' May 10 21:28:58.557: INFO: stderr: "I0510 21:28:58.437563 705 log.go:172] (0xc0009f0bb0) (0xc00097e280) Create stream\nI0510 21:28:58.437637 705 log.go:172] (0xc0009f0bb0) (0xc00097e280) Stream added, broadcasting: 1\nI0510 21:28:58.440743 705 log.go:172] (0xc0009f0bb0) Reply frame received for 1\nI0510 21:28:58.440803 705 log.go:172] (0xc0009f0bb0) (0xc0008d0000) Create stream\nI0510 21:28:58.440825 705 log.go:172] (0xc0009f0bb0) (0xc0008d0000) Stream added, broadcasting: 3\nI0510 21:28:58.441958 705 log.go:172] (0xc0009f0bb0) Reply frame received for 3\nI0510 21:28:58.442026 705 log.go:172] (0xc0009f0bb0) (0xc00071e280) Create stream\nI0510 21:28:58.442056 705 log.go:172] (0xc0009f0bb0) (0xc00071e280) Stream added, broadcasting: 5\nI0510 21:28:58.442956 705 log.go:172] (0xc0009f0bb0) Reply frame received for 5\nI0510 21:28:58.522597 705 log.go:172] (0xc0009f0bb0) Data frame received for 5\nI0510 21:28:58.522623 705 log.go:172] (0xc00071e280) (5) Data frame handling\nI0510 21:28:58.522640 705 log.go:172] (0xc00071e280) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0510 21:28:58.549856 705 log.go:172] (0xc0009f0bb0) Data frame 
received for 3\nI0510 21:28:58.549885 705 log.go:172] (0xc0008d0000) (3) Data frame handling\nI0510 21:28:58.549930 705 log.go:172] (0xc0008d0000) (3) Data frame sent\nI0510 21:28:58.549968 705 log.go:172] (0xc0009f0bb0) Data frame received for 3\nI0510 21:28:58.550114 705 log.go:172] (0xc0008d0000) (3) Data frame handling\nI0510 21:28:58.550287 705 log.go:172] (0xc0009f0bb0) Data frame received for 5\nI0510 21:28:58.550312 705 log.go:172] (0xc00071e280) (5) Data frame handling\nI0510 21:28:58.551950 705 log.go:172] (0xc0009f0bb0) Data frame received for 1\nI0510 21:28:58.551999 705 log.go:172] (0xc00097e280) (1) Data frame handling\nI0510 21:28:58.552026 705 log.go:172] (0xc00097e280) (1) Data frame sent\nI0510 21:28:58.552058 705 log.go:172] (0xc0009f0bb0) (0xc00097e280) Stream removed, broadcasting: 1\nI0510 21:28:58.552089 705 log.go:172] (0xc0009f0bb0) Go away received\nI0510 21:28:58.552507 705 log.go:172] (0xc0009f0bb0) (0xc00097e280) Stream removed, broadcasting: 1\nI0510 21:28:58.552533 705 log.go:172] (0xc0009f0bb0) (0xc0008d0000) Stream removed, broadcasting: 3\nI0510 21:28:58.552544 705 log.go:172] (0xc0009f0bb0) (0xc00071e280) Stream removed, broadcasting: 5\n" May 10 21:28:58.558: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" May 10 21:28:58.558: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss2-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' STEP: Updating StatefulSet template: update image from docker.io/library/httpd:2.4.38-alpine to docker.io/library/httpd:2.4.39-alpine May 10 21:29:08.591: INFO: Updating stateful set ss2 STEP: Creating a new revision STEP: Updating Pods in reverse ordinal order May 10 21:29:18.620: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2989 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 10 21:29:18.851: INFO: stderr: "I0510 21:29:18.763910 725 log.go:172] 
(0xc000107130) (0xc0008f41e0) Create stream\nI0510 21:29:18.763969 725 log.go:172] (0xc000107130) (0xc0008f41e0) Stream added, broadcasting: 1\nI0510 21:29:18.766831 725 log.go:172] (0xc000107130) Reply frame received for 1\nI0510 21:29:18.766884 725 log.go:172] (0xc000107130) (0xc0008f4280) Create stream\nI0510 21:29:18.766914 725 log.go:172] (0xc000107130) (0xc0008f4280) Stream added, broadcasting: 3\nI0510 21:29:18.768078 725 log.go:172] (0xc000107130) Reply frame received for 3\nI0510 21:29:18.768114 725 log.go:172] (0xc000107130) (0xc000717360) Create stream\nI0510 21:29:18.768128 725 log.go:172] (0xc000107130) (0xc000717360) Stream added, broadcasting: 5\nI0510 21:29:18.769598 725 log.go:172] (0xc000107130) Reply frame received for 5\nI0510 21:29:18.845927 725 log.go:172] (0xc000107130) Data frame received for 5\nI0510 21:29:18.845979 725 log.go:172] (0xc000717360) (5) Data frame handling\nI0510 21:29:18.845998 725 log.go:172] (0xc000717360) (5) Data frame sent\nI0510 21:29:18.846016 725 log.go:172] (0xc000107130) Data frame received for 5\nI0510 21:29:18.846024 725 log.go:172] (0xc000717360) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0510 21:29:18.846063 725 log.go:172] (0xc000107130) Data frame received for 3\nI0510 21:29:18.846096 725 log.go:172] (0xc0008f4280) (3) Data frame handling\nI0510 21:29:18.846138 725 log.go:172] (0xc0008f4280) (3) Data frame sent\nI0510 21:29:18.846172 725 log.go:172] (0xc000107130) Data frame received for 3\nI0510 21:29:18.846208 725 log.go:172] (0xc0008f4280) (3) Data frame handling\nI0510 21:29:18.847499 725 log.go:172] (0xc000107130) Data frame received for 1\nI0510 21:29:18.847520 725 log.go:172] (0xc0008f41e0) (1) Data frame handling\nI0510 21:29:18.847541 725 log.go:172] (0xc0008f41e0) (1) Data frame sent\nI0510 21:29:18.847596 725 log.go:172] (0xc000107130) (0xc0008f41e0) Stream removed, broadcasting: 1\nI0510 21:29:18.847696 725 log.go:172] (0xc000107130) Go away received\nI0510 
21:29:18.848086 725 log.go:172] (0xc000107130) (0xc0008f41e0) Stream removed, broadcasting: 1\nI0510 21:29:18.848109 725 log.go:172] (0xc000107130) (0xc0008f4280) Stream removed, broadcasting: 3\nI0510 21:29:18.848129 725 log.go:172] (0xc000107130) (0xc000717360) Stream removed, broadcasting: 5\n" May 10 21:29:18.852: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" May 10 21:29:18.852: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss2-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' May 10 21:29:48.872: INFO: Waiting for StatefulSet statefulset-2989/ss2 to complete update May 10 21:29:48.873: INFO: Waiting for Pod statefulset-2989/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 STEP: Rolling back to a previous revision May 10 21:29:58.882: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2989 ss2-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' May 10 21:29:59.149: INFO: stderr: "I0510 21:29:59.020822 747 log.go:172] (0xc0009db340) (0xc0009ae780) Create stream\nI0510 21:29:59.020884 747 log.go:172] (0xc0009db340) (0xc0009ae780) Stream added, broadcasting: 1\nI0510 21:29:59.026815 747 log.go:172] (0xc0009db340) Reply frame received for 1\nI0510 21:29:59.026871 747 log.go:172] (0xc0009db340) (0xc00059e640) Create stream\nI0510 21:29:59.026890 747 log.go:172] (0xc0009db340) (0xc00059e640) Stream added, broadcasting: 3\nI0510 21:29:59.027939 747 log.go:172] (0xc0009db340) Reply frame received for 3\nI0510 21:29:59.027978 747 log.go:172] (0xc0009db340) (0xc000755360) Create stream\nI0510 21:29:59.027993 747 log.go:172] (0xc0009db340) (0xc000755360) Stream added, broadcasting: 5\nI0510 21:29:59.028929 747 log.go:172] (0xc0009db340) Reply frame received for 5\nI0510 21:29:59.105816 747 log.go:172] (0xc0009db340) Data frame received for 5\nI0510 21:29:59.105847 747 log.go:172] (0xc000755360) (5) Data 
frame handling\nI0510 21:29:59.105867 747 log.go:172] (0xc000755360) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0510 21:29:59.140516 747 log.go:172] (0xc0009db340) Data frame received for 3\nI0510 21:29:59.140564 747 log.go:172] (0xc00059e640) (3) Data frame handling\nI0510 21:29:59.140584 747 log.go:172] (0xc00059e640) (3) Data frame sent\nI0510 21:29:59.140600 747 log.go:172] (0xc0009db340) Data frame received for 3\nI0510 21:29:59.140614 747 log.go:172] (0xc00059e640) (3) Data frame handling\nI0510 21:29:59.140642 747 log.go:172] (0xc0009db340) Data frame received for 5\nI0510 21:29:59.140660 747 log.go:172] (0xc000755360) (5) Data frame handling\nI0510 21:29:59.142971 747 log.go:172] (0xc0009db340) Data frame received for 1\nI0510 21:29:59.142991 747 log.go:172] (0xc0009ae780) (1) Data frame handling\nI0510 21:29:59.143006 747 log.go:172] (0xc0009ae780) (1) Data frame sent\nI0510 21:29:59.143020 747 log.go:172] (0xc0009db340) (0xc0009ae780) Stream removed, broadcasting: 1\nI0510 21:29:59.143035 747 log.go:172] (0xc0009db340) Go away received\nI0510 21:29:59.143609 747 log.go:172] (0xc0009db340) (0xc0009ae780) Stream removed, broadcasting: 1\nI0510 21:29:59.143634 747 log.go:172] (0xc0009db340) (0xc00059e640) Stream removed, broadcasting: 3\nI0510 21:29:59.143648 747 log.go:172] (0xc0009db340) (0xc000755360) Stream removed, broadcasting: 5\n" May 10 21:29:59.150: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" May 10 21:29:59.150: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss2-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' May 10 21:30:09.185: INFO: Updating stateful set ss2 STEP: Rolling back update in reverse ordinal order May 10 21:30:19.338: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2989 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 10 21:30:19.568: INFO: 
stderr: "I0510 21:30:19.469457 769 log.go:172] (0xc000908d10) (0xc000637ea0) Create stream\nI0510 21:30:19.469519 769 log.go:172] (0xc000908d10) (0xc000637ea0) Stream added, broadcasting: 1\nI0510 21:30:19.472350 769 log.go:172] (0xc000908d10) Reply frame received for 1\nI0510 21:30:19.472391 769 log.go:172] (0xc000908d10) (0xc000637f40) Create stream\nI0510 21:30:19.472402 769 log.go:172] (0xc000908d10) (0xc000637f40) Stream added, broadcasting: 3\nI0510 21:30:19.473568 769 log.go:172] (0xc000908d10) Reply frame received for 3\nI0510 21:30:19.473623 769 log.go:172] (0xc000908d10) (0xc000a4c000) Create stream\nI0510 21:30:19.473634 769 log.go:172] (0xc000908d10) (0xc000a4c000) Stream added, broadcasting: 5\nI0510 21:30:19.474919 769 log.go:172] (0xc000908d10) Reply frame received for 5\nI0510 21:30:19.562011 769 log.go:172] (0xc000908d10) Data frame received for 3\nI0510 21:30:19.562044 769 log.go:172] (0xc000908d10) Data frame received for 5\nI0510 21:30:19.562069 769 log.go:172] (0xc000637f40) (3) Data frame handling\nI0510 21:30:19.562093 769 log.go:172] (0xc000637f40) (3) Data frame sent\nI0510 21:30:19.562112 769 log.go:172] (0xc000a4c000) (5) Data frame handling\nI0510 21:30:19.562119 769 log.go:172] (0xc000a4c000) (5) Data frame sent\nI0510 21:30:19.562125 769 log.go:172] (0xc000908d10) Data frame received for 5\nI0510 21:30:19.562132 769 log.go:172] (0xc000a4c000) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0510 21:30:19.562612 769 log.go:172] (0xc000908d10) Data frame received for 3\nI0510 21:30:19.562675 769 log.go:172] (0xc000637f40) (3) Data frame handling\nI0510 21:30:19.564095 769 log.go:172] (0xc000908d10) Data frame received for 1\nI0510 21:30:19.564118 769 log.go:172] (0xc000637ea0) (1) Data frame handling\nI0510 21:30:19.564144 769 log.go:172] (0xc000637ea0) (1) Data frame sent\nI0510 21:30:19.564235 769 log.go:172] (0xc000908d10) (0xc000637ea0) Stream removed, broadcasting: 1\nI0510 21:30:19.564296 769 
log.go:172] (0xc000908d10) Go away received\nI0510 21:30:19.564458 769 log.go:172] (0xc000908d10) (0xc000637ea0) Stream removed, broadcasting: 1\nI0510 21:30:19.564469 769 log.go:172] (0xc000908d10) (0xc000637f40) Stream removed, broadcasting: 3\nI0510 21:30:19.564475 769 log.go:172] (0xc000908d10) (0xc000a4c000) Stream removed, broadcasting: 5\n" May 10 21:30:19.568: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" May 10 21:30:19.568: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss2-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' May 10 21:30:29.645: INFO: Waiting for StatefulSet statefulset-2989/ss2 to complete update May 10 21:30:29.645: INFO: Waiting for Pod statefulset-2989/ss2-0 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57 May 10 21:30:29.645: INFO: Waiting for Pod statefulset-2989/ss2-1 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57 May 10 21:30:39.653: INFO: Waiting for StatefulSet statefulset-2989/ss2 to complete update May 10 21:30:39.653: INFO: Waiting for Pod statefulset-2989/ss2-0 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57 May 10 21:30:49.651: INFO: Waiting for StatefulSet statefulset-2989/ss2 to complete update [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90 May 10 21:30:59.652: INFO: Deleting all statefulset in ns statefulset-2989 May 10 21:30:59.655: INFO: Scaling statefulset ss2 to 0 May 10 21:31:19.675: INFO: Waiting for statefulset status.replicas updated to 0 May 10 21:31:19.677: INFO: Deleting statefulset ss2 [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 10 21:31:19.694: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-2989" for this suite. 
• [SLOW TEST:161.506 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance]","total":278,"completed":77,"skipped":1310,"failed":0} [sig-node] ConfigMap should fail to create ConfigMap with empty key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 10 21:31:19.703: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should fail to create ConfigMap with empty key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap that has name configmap-test-emptyKey-b1cce3f3-c715-4eff-9605-b09e0a079325 [AfterEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 10 21:31:19.798: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-4088" for this suite. 
•{"msg":"PASSED [sig-node] ConfigMap should fail to create ConfigMap with empty key [Conformance]","total":278,"completed":78,"skipped":1310,"failed":0} ------------------------------ [sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 10 21:31:19.806: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating the pod May 10 21:31:24.448: INFO: Successfully updated pod "annotationupdatefa317b91-5d18-486b-b2f8-98970d32ed8e" [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 10 21:31:26.469: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-8882" for this suite. 
• [SLOW TEST:6.672 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35 should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance]","total":278,"completed":79,"skipped":1310,"failed":0} SSSSSSSS ------------------------------ [sig-network] Proxy version v1 should proxy through a service and a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 10 21:31:26.479: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy through a service and a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: starting an echo server on multiple ports STEP: creating replication controller proxy-service-2wh9c in namespace proxy-10 I0510 21:31:26.608807 6 runners.go:189] Created replication controller with name: proxy-service-2wh9c, namespace: proxy-10, replica count: 1 I0510 21:31:27.659240 6 runners.go:189] proxy-service-2wh9c Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0510 21:31:28.659485 6 runners.go:189] proxy-service-2wh9c Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0510 21:31:29.659740 6 runners.go:189] proxy-service-2wh9c Pods: 1 out of 1 created, 0 running, 1 
pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0510 21:31:30.659982 6 runners.go:189] proxy-service-2wh9c Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0510 21:31:31.660225 6 runners.go:189] proxy-service-2wh9c Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0510 21:31:32.660444 6 runners.go:189] proxy-service-2wh9c Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0510 21:31:33.660627 6 runners.go:189] proxy-service-2wh9c Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0510 21:31:34.660814 6 runners.go:189] proxy-service-2wh9c Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0510 21:31:35.661006 6 runners.go:189] proxy-service-2wh9c Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady May 10 21:31:35.665: INFO: setup took 9.132958301s, starting test cases STEP: running 16 cases, 20 attempts per case, 320 total attempts May 10 21:31:35.673: INFO: (0) /api/v1/namespaces/proxy-10/pods/http:proxy-service-2wh9c-xgvsk:160/proxy/: foo (200; 7.975579ms) May 10 21:31:35.673: INFO: (0) /api/v1/namespaces/proxy-10/pods/proxy-service-2wh9c-xgvsk:160/proxy/: foo (200; 7.571958ms) May 10 21:31:35.673: INFO: (0) /api/v1/namespaces/proxy-10/pods/http:proxy-service-2wh9c-xgvsk:162/proxy/: bar (200; 8.001544ms) May 10 21:31:35.674: INFO: (0) /api/v1/namespaces/proxy-10/services/http:proxy-service-2wh9c:portname2/proxy/: bar (200; 8.082286ms) May 10 21:31:35.674: INFO: (0) /api/v1/namespaces/proxy-10/pods/proxy-service-2wh9c-xgvsk:162/proxy/: bar (200; 8.588891ms) May 10 21:31:35.674: INFO: (0) 
/api/v1/namespaces/proxy-10/pods/proxy-service-2wh9c-xgvsk/proxy/: test (200; 8.311624ms) May 10 21:31:35.675: INFO: (0) /api/v1/namespaces/proxy-10/services/proxy-service-2wh9c:portname2/proxy/: bar (200; 9.839844ms) May 10 21:31:35.676: INFO: (0) /api/v1/namespaces/proxy-10/services/http:proxy-service-2wh9c:portname1/proxy/: foo (200; 10.56887ms) May 10 21:31:35.677: INFO: (0) /api/v1/namespaces/proxy-10/services/proxy-service-2wh9c:portname1/proxy/: foo (200; 11.711654ms) May 10 21:31:35.678: INFO: (0) /api/v1/namespaces/proxy-10/pods/proxy-service-2wh9c-xgvsk:1080/proxy/: testte... (200; 13.115052ms) May 10 21:31:35.682: INFO: (0) /api/v1/namespaces/proxy-10/pods/https:proxy-service-2wh9c-xgvsk:460/proxy/: tls baz (200; 16.683508ms) May 10 21:31:35.682: INFO: (0) /api/v1/namespaces/proxy-10/services/https:proxy-service-2wh9c:tlsportname1/proxy/: tls baz (200; 16.859455ms) May 10 21:31:35.683: INFO: (0) /api/v1/namespaces/proxy-10/pods/https:proxy-service-2wh9c-xgvsk:462/proxy/: tls qux (200; 17.32157ms) May 10 21:31:35.683: INFO: (0) /api/v1/namespaces/proxy-10/services/https:proxy-service-2wh9c:tlsportname2/proxy/: tls qux (200; 17.457417ms) May 10 21:31:35.683: INFO: (0) /api/v1/namespaces/proxy-10/pods/https:proxy-service-2wh9c-xgvsk:443/proxy/: testte... 
(200; 5.23991ms) May 10 21:31:35.689: INFO: (1) /api/v1/namespaces/proxy-10/services/https:proxy-service-2wh9c:tlsportname1/proxy/: tls baz (200; 5.312887ms) May 10 21:31:35.689: INFO: (1) /api/v1/namespaces/proxy-10/pods/proxy-service-2wh9c-xgvsk/proxy/: test (200; 5.615972ms) May 10 21:31:35.689: INFO: (1) /api/v1/namespaces/proxy-10/services/proxy-service-2wh9c:portname2/proxy/: bar (200; 5.581869ms) May 10 21:31:35.689: INFO: (1) /api/v1/namespaces/proxy-10/pods/https:proxy-service-2wh9c-xgvsk:462/proxy/: tls qux (200; 5.852341ms) May 10 21:31:35.689: INFO: (1) /api/v1/namespaces/proxy-10/services/http:proxy-service-2wh9c:portname1/proxy/: foo (200; 5.819976ms) May 10 21:31:35.689: INFO: (1) /api/v1/namespaces/proxy-10/services/http:proxy-service-2wh9c:portname2/proxy/: bar (200; 5.852863ms) May 10 21:31:35.689: INFO: (1) /api/v1/namespaces/proxy-10/services/https:proxy-service-2wh9c:tlsportname2/proxy/: tls qux (200; 5.827708ms) May 10 21:31:35.689: INFO: (1) /api/v1/namespaces/proxy-10/pods/http:proxy-service-2wh9c-xgvsk:160/proxy/: foo (200; 5.926105ms) May 10 21:31:35.693: INFO: (2) /api/v1/namespaces/proxy-10/pods/http:proxy-service-2wh9c-xgvsk:1080/proxy/: te... 
(200; 3.389213ms) May 10 21:31:35.693: INFO: (2) /api/v1/namespaces/proxy-10/pods/proxy-service-2wh9c-xgvsk:160/proxy/: foo (200; 3.723206ms) May 10 21:31:35.694: INFO: (2) /api/v1/namespaces/proxy-10/pods/https:proxy-service-2wh9c-xgvsk:443/proxy/: test (200; 5.370972ms) May 10 21:31:35.695: INFO: (2) /api/v1/namespaces/proxy-10/pods/http:proxy-service-2wh9c-xgvsk:160/proxy/: foo (200; 5.379527ms) May 10 21:31:35.695: INFO: (2) /api/v1/namespaces/proxy-10/pods/https:proxy-service-2wh9c-xgvsk:462/proxy/: tls qux (200; 5.401382ms) May 10 21:31:35.695: INFO: (2) /api/v1/namespaces/proxy-10/services/http:proxy-service-2wh9c:portname2/proxy/: bar (200; 5.519186ms) May 10 21:31:35.695: INFO: (2) /api/v1/namespaces/proxy-10/pods/proxy-service-2wh9c-xgvsk:1080/proxy/: testtest (200; 31.435681ms) May 10 21:31:35.729: INFO: (3) /api/v1/namespaces/proxy-10/pods/https:proxy-service-2wh9c-xgvsk:462/proxy/: tls qux (200; 31.83166ms) May 10 21:31:35.729: INFO: (3) /api/v1/namespaces/proxy-10/pods/proxy-service-2wh9c-xgvsk:162/proxy/: bar (200; 32.928093ms) May 10 21:31:35.730: INFO: (3) /api/v1/namespaces/proxy-10/pods/http:proxy-service-2wh9c-xgvsk:1080/proxy/: te... (200; 33.051451ms) May 10 21:31:35.731: INFO: (3) /api/v1/namespaces/proxy-10/pods/proxy-service-2wh9c-xgvsk:1080/proxy/: testtest (200; 6.539894ms) May 10 21:31:35.739: INFO: (4) /api/v1/namespaces/proxy-10/pods/proxy-service-2wh9c-xgvsk:162/proxy/: bar (200; 6.768723ms) May 10 21:31:35.739: INFO: (4) /api/v1/namespaces/proxy-10/pods/http:proxy-service-2wh9c-xgvsk:162/proxy/: bar (200; 7.064294ms) May 10 21:31:35.739: INFO: (4) /api/v1/namespaces/proxy-10/pods/proxy-service-2wh9c-xgvsk:1080/proxy/: testte... 
(200; 7.28757ms) May 10 21:31:35.739: INFO: (4) /api/v1/namespaces/proxy-10/services/proxy-service-2wh9c:portname1/proxy/: foo (200; 7.450395ms) May 10 21:31:35.739: INFO: (4) /api/v1/namespaces/proxy-10/services/http:proxy-service-2wh9c:portname2/proxy/: bar (200; 7.468075ms) May 10 21:31:35.739: INFO: (4) /api/v1/namespaces/proxy-10/pods/https:proxy-service-2wh9c-xgvsk:443/proxy/: testte... (200; 5.163821ms) May 10 21:31:35.746: INFO: (5) /api/v1/namespaces/proxy-10/services/http:proxy-service-2wh9c:portname2/proxy/: bar (200; 5.197624ms) May 10 21:31:35.746: INFO: (5) /api/v1/namespaces/proxy-10/services/proxy-service-2wh9c:portname1/proxy/: foo (200; 5.327785ms) May 10 21:31:35.746: INFO: (5) /api/v1/namespaces/proxy-10/pods/proxy-service-2wh9c-xgvsk/proxy/: test (200; 5.196039ms) May 10 21:31:35.746: INFO: (5) /api/v1/namespaces/proxy-10/pods/proxy-service-2wh9c-xgvsk:162/proxy/: bar (200; 5.228925ms) May 10 21:31:35.746: INFO: (5) /api/v1/namespaces/proxy-10/pods/https:proxy-service-2wh9c-xgvsk:443/proxy/: testte... 
(200; 4.225832ms) May 10 21:31:35.751: INFO: (6) /api/v1/namespaces/proxy-10/pods/https:proxy-service-2wh9c-xgvsk:443/proxy/: test (200; 4.561604ms) May 10 21:31:35.752: INFO: (6) /api/v1/namespaces/proxy-10/services/proxy-service-2wh9c:portname2/proxy/: bar (200; 5.434233ms) May 10 21:31:35.752: INFO: (6) /api/v1/namespaces/proxy-10/services/http:proxy-service-2wh9c:portname2/proxy/: bar (200; 5.525932ms) May 10 21:31:35.752: INFO: (6) /api/v1/namespaces/proxy-10/services/http:proxy-service-2wh9c:portname1/proxy/: foo (200; 5.484453ms) May 10 21:31:35.752: INFO: (6) /api/v1/namespaces/proxy-10/services/https:proxy-service-2wh9c:tlsportname2/proxy/: tls qux (200; 5.455755ms) May 10 21:31:35.752: INFO: (6) /api/v1/namespaces/proxy-10/services/proxy-service-2wh9c:portname1/proxy/: foo (200; 5.581823ms) May 10 21:31:35.752: INFO: (6) /api/v1/namespaces/proxy-10/services/https:proxy-service-2wh9c:tlsportname1/proxy/: tls baz (200; 5.485496ms) May 10 21:31:35.756: INFO: (7) /api/v1/namespaces/proxy-10/pods/https:proxy-service-2wh9c-xgvsk:460/proxy/: tls baz (200; 3.673279ms) May 10 21:31:35.756: INFO: (7) /api/v1/namespaces/proxy-10/pods/proxy-service-2wh9c-xgvsk:1080/proxy/: testte... 
(200; 3.665606ms) May 10 21:31:35.756: INFO: (7) /api/v1/namespaces/proxy-10/pods/proxy-service-2wh9c-xgvsk/proxy/: test (200; 3.790315ms) May 10 21:31:35.756: INFO: (7) /api/v1/namespaces/proxy-10/pods/proxy-service-2wh9c-xgvsk:162/proxy/: bar (200; 4.019786ms) May 10 21:31:35.756: INFO: (7) /api/v1/namespaces/proxy-10/pods/http:proxy-service-2wh9c-xgvsk:160/proxy/: foo (200; 4.001785ms) May 10 21:31:35.756: INFO: (7) /api/v1/namespaces/proxy-10/pods/http:proxy-service-2wh9c-xgvsk:162/proxy/: bar (200; 4.11131ms) May 10 21:31:35.756: INFO: (7) /api/v1/namespaces/proxy-10/pods/https:proxy-service-2wh9c-xgvsk:462/proxy/: tls qux (200; 3.990912ms) May 10 21:31:35.756: INFO: (7) /api/v1/namespaces/proxy-10/pods/https:proxy-service-2wh9c-xgvsk:443/proxy/: testtest (200; 5.426675ms) May 10 21:31:35.765: INFO: (8) /api/v1/namespaces/proxy-10/services/proxy-service-2wh9c:portname1/proxy/: foo (200; 5.555097ms) May 10 21:31:35.765: INFO: (8) /api/v1/namespaces/proxy-10/pods/https:proxy-service-2wh9c-xgvsk:443/proxy/: te... (200; 5.650888ms) May 10 21:31:35.765: INFO: (8) /api/v1/namespaces/proxy-10/services/https:proxy-service-2wh9c:tlsportname1/proxy/: tls baz (200; 5.755991ms) May 10 21:31:35.766: INFO: (8) /api/v1/namespaces/proxy-10/pods/https:proxy-service-2wh9c-xgvsk:462/proxy/: tls qux (200; 5.755501ms) May 10 21:31:35.766: INFO: (8) /api/v1/namespaces/proxy-10/pods/http:proxy-service-2wh9c-xgvsk:162/proxy/: bar (200; 5.803354ms) May 10 21:31:35.766: INFO: (8) /api/v1/namespaces/proxy-10/pods/https:proxy-service-2wh9c-xgvsk:460/proxy/: tls baz (200; 5.807755ms) May 10 21:31:35.769: INFO: (9) /api/v1/namespaces/proxy-10/pods/proxy-service-2wh9c-xgvsk:162/proxy/: bar (200; 3.616321ms) May 10 21:31:35.770: INFO: (9) /api/v1/namespaces/proxy-10/pods/http:proxy-service-2wh9c-xgvsk:1080/proxy/: te... 
(200; 4.734126ms) May 10 21:31:35.771: INFO: (9) /api/v1/namespaces/proxy-10/pods/http:proxy-service-2wh9c-xgvsk:160/proxy/: foo (200; 5.084538ms) May 10 21:31:35.771: INFO: (9) /api/v1/namespaces/proxy-10/pods/proxy-service-2wh9c-xgvsk/proxy/: test (200; 5.190507ms) May 10 21:31:35.771: INFO: (9) /api/v1/namespaces/proxy-10/pods/proxy-service-2wh9c-xgvsk:1080/proxy/: testtest (200; 3.847973ms) May 10 21:31:35.776: INFO: (10) /api/v1/namespaces/proxy-10/pods/https:proxy-service-2wh9c-xgvsk:460/proxy/: tls baz (200; 3.910007ms) May 10 21:31:35.776: INFO: (10) /api/v1/namespaces/proxy-10/pods/https:proxy-service-2wh9c-xgvsk:443/proxy/: testte... (200; 3.860816ms) May 10 21:31:35.776: INFO: (10) /api/v1/namespaces/proxy-10/pods/proxy-service-2wh9c-xgvsk:162/proxy/: bar (200; 3.953058ms) May 10 21:31:35.776: INFO: (10) /api/v1/namespaces/proxy-10/pods/http:proxy-service-2wh9c-xgvsk:160/proxy/: foo (200; 3.940767ms) May 10 21:31:35.777: INFO: (10) /api/v1/namespaces/proxy-10/services/proxy-service-2wh9c:portname1/proxy/: foo (200; 4.533828ms) May 10 21:31:35.777: INFO: (10) /api/v1/namespaces/proxy-10/services/https:proxy-service-2wh9c:tlsportname1/proxy/: tls baz (200; 4.626898ms) May 10 21:31:35.777: INFO: (10) /api/v1/namespaces/proxy-10/services/http:proxy-service-2wh9c:portname1/proxy/: foo (200; 4.815989ms) May 10 21:31:35.777: INFO: (10) /api/v1/namespaces/proxy-10/services/https:proxy-service-2wh9c:tlsportname2/proxy/: tls qux (200; 4.927499ms) May 10 21:31:35.777: INFO: (10) /api/v1/namespaces/proxy-10/services/http:proxy-service-2wh9c:portname2/proxy/: bar (200; 4.926594ms) May 10 21:31:35.777: INFO: (10) /api/v1/namespaces/proxy-10/services/proxy-service-2wh9c:portname2/proxy/: bar (200; 4.939067ms) May 10 21:31:35.780: INFO: (11) /api/v1/namespaces/proxy-10/pods/proxy-service-2wh9c-xgvsk:160/proxy/: foo (200; 2.499338ms) May 10 21:31:35.780: INFO: (11) /api/v1/namespaces/proxy-10/pods/https:proxy-service-2wh9c-xgvsk:460/proxy/: tls baz (200; 2.592257ms) May 
10 21:31:35.780: INFO: (11) /api/v1/namespaces/proxy-10/pods/https:proxy-service-2wh9c-xgvsk:462/proxy/: tls qux (200; 2.605571ms) May 10 21:31:35.783: INFO: (11) /api/v1/namespaces/proxy-10/services/proxy-service-2wh9c:portname1/proxy/: foo (200; 5.217156ms) May 10 21:31:35.783: INFO: (11) /api/v1/namespaces/proxy-10/services/http:proxy-service-2wh9c:portname1/proxy/: foo (200; 5.263633ms) May 10 21:31:35.783: INFO: (11) /api/v1/namespaces/proxy-10/pods/proxy-service-2wh9c-xgvsk:1080/proxy/: testtest (200; 5.470506ms) May 10 21:31:35.783: INFO: (11) /api/v1/namespaces/proxy-10/services/https:proxy-service-2wh9c:tlsportname1/proxy/: tls baz (200; 5.508297ms) May 10 21:31:35.783: INFO: (11) /api/v1/namespaces/proxy-10/pods/https:proxy-service-2wh9c-xgvsk:443/proxy/: te... (200; 6.133529ms) May 10 21:31:35.786: INFO: (12) /api/v1/namespaces/proxy-10/pods/proxy-service-2wh9c-xgvsk:160/proxy/: foo (200; 2.488282ms) May 10 21:31:35.786: INFO: (12) /api/v1/namespaces/proxy-10/pods/proxy-service-2wh9c-xgvsk/proxy/: test (200; 2.600358ms) May 10 21:31:35.786: INFO: (12) /api/v1/namespaces/proxy-10/pods/https:proxy-service-2wh9c-xgvsk:462/proxy/: tls qux (200; 2.676363ms) May 10 21:31:35.790: INFO: (12) /api/v1/namespaces/proxy-10/pods/proxy-service-2wh9c-xgvsk:162/proxy/: bar (200; 6.183643ms) May 10 21:31:35.790: INFO: (12) /api/v1/namespaces/proxy-10/pods/http:proxy-service-2wh9c-xgvsk:162/proxy/: bar (200; 6.220835ms) May 10 21:31:35.790: INFO: (12) /api/v1/namespaces/proxy-10/pods/https:proxy-service-2wh9c-xgvsk:443/proxy/: te... 
(200; 8.51213ms) May 10 21:31:35.792: INFO: (12) /api/v1/namespaces/proxy-10/pods/http:proxy-service-2wh9c-xgvsk:160/proxy/: foo (200; 8.519076ms) May 10 21:31:35.792: INFO: (12) /api/v1/namespaces/proxy-10/services/http:proxy-service-2wh9c:portname2/proxy/: bar (200; 8.601498ms) May 10 21:31:35.792: INFO: (12) /api/v1/namespaces/proxy-10/pods/proxy-service-2wh9c-xgvsk:1080/proxy/: testtesttest (200; 4.258492ms) May 10 21:31:35.798: INFO: (13) /api/v1/namespaces/proxy-10/services/proxy-service-2wh9c:portname1/proxy/: foo (200; 4.295693ms) May 10 21:31:35.798: INFO: (13) /api/v1/namespaces/proxy-10/services/https:proxy-service-2wh9c:tlsportname1/proxy/: tls baz (200; 4.359731ms) May 10 21:31:35.798: INFO: (13) /api/v1/namespaces/proxy-10/pods/http:proxy-service-2wh9c-xgvsk:160/proxy/: foo (200; 4.385265ms) May 10 21:31:35.798: INFO: (13) /api/v1/namespaces/proxy-10/pods/https:proxy-service-2wh9c-xgvsk:462/proxy/: tls qux (200; 4.320343ms) May 10 21:31:35.798: INFO: (13) /api/v1/namespaces/proxy-10/pods/http:proxy-service-2wh9c-xgvsk:1080/proxy/: te... 
(200; 4.458139ms) May 10 21:31:35.798: INFO: (13) /api/v1/namespaces/proxy-10/services/proxy-service-2wh9c:portname2/proxy/: bar (200; 4.428559ms) May 10 21:31:35.798: INFO: (13) /api/v1/namespaces/proxy-10/services/https:proxy-service-2wh9c:tlsportname2/proxy/: tls qux (200; 4.492806ms) May 10 21:31:35.799: INFO: (13) /api/v1/namespaces/proxy-10/services/http:proxy-service-2wh9c:portname2/proxy/: bar (200; 5.621011ms) May 10 21:31:35.799: INFO: (13) /api/v1/namespaces/proxy-10/services/http:proxy-service-2wh9c:portname1/proxy/: foo (200; 5.866286ms) May 10 21:31:35.804: INFO: (14) /api/v1/namespaces/proxy-10/pods/proxy-service-2wh9c-xgvsk:160/proxy/: foo (200; 4.835257ms) May 10 21:31:35.805: INFO: (14) /api/v1/namespaces/proxy-10/pods/proxy-service-2wh9c-xgvsk/proxy/: test (200; 5.421466ms) May 10 21:31:35.805: INFO: (14) /api/v1/namespaces/proxy-10/pods/https:proxy-service-2wh9c-xgvsk:460/proxy/: tls baz (200; 5.93256ms) May 10 21:31:35.805: INFO: (14) /api/v1/namespaces/proxy-10/pods/http:proxy-service-2wh9c-xgvsk:1080/proxy/: te... (200; 5.91137ms) May 10 21:31:35.806: INFO: (14) /api/v1/namespaces/proxy-10/services/http:proxy-service-2wh9c:portname1/proxy/: foo (200; 7.049864ms) May 10 21:31:35.806: INFO: (14) /api/v1/namespaces/proxy-10/services/proxy-service-2wh9c:portname1/proxy/: foo (200; 6.993048ms) May 10 21:31:35.807: INFO: (14) /api/v1/namespaces/proxy-10/pods/https:proxy-service-2wh9c-xgvsk:462/proxy/: tls qux (200; 7.101263ms) May 10 21:31:35.807: INFO: (14) /api/v1/namespaces/proxy-10/pods/proxy-service-2wh9c-xgvsk:1080/proxy/: testtest (200; 3.025446ms) May 10 21:31:35.811: INFO: (15) /api/v1/namespaces/proxy-10/pods/http:proxy-service-2wh9c-xgvsk:162/proxy/: bar (200; 3.913059ms) May 10 21:31:35.812: INFO: (15) /api/v1/namespaces/proxy-10/pods/proxy-service-2wh9c-xgvsk:1080/proxy/: testte... 
(200; 4.903938ms) May 10 21:31:35.813: INFO: (15) /api/v1/namespaces/proxy-10/services/http:proxy-service-2wh9c:portname1/proxy/: foo (200; 4.98596ms) May 10 21:31:35.813: INFO: (15) /api/v1/namespaces/proxy-10/services/https:proxy-service-2wh9c:tlsportname2/proxy/: tls qux (200; 4.972096ms) May 10 21:31:35.813: INFO: (15) /api/v1/namespaces/proxy-10/pods/https:proxy-service-2wh9c-xgvsk:460/proxy/: tls baz (200; 4.965628ms) May 10 21:31:35.813: INFO: (15) /api/v1/namespaces/proxy-10/pods/proxy-service-2wh9c-xgvsk:162/proxy/: bar (200; 5.168466ms) May 10 21:31:35.813: INFO: (15) /api/v1/namespaces/proxy-10/services/proxy-service-2wh9c:portname2/proxy/: bar (200; 5.128052ms) May 10 21:31:35.813: INFO: (15) /api/v1/namespaces/proxy-10/services/https:proxy-service-2wh9c:tlsportname1/proxy/: tls baz (200; 5.016971ms) May 10 21:31:35.813: INFO: (15) /api/v1/namespaces/proxy-10/pods/proxy-service-2wh9c-xgvsk:160/proxy/: foo (200; 5.284217ms) May 10 21:31:35.813: INFO: (15) /api/v1/namespaces/proxy-10/services/http:proxy-service-2wh9c:portname2/proxy/: bar (200; 5.327114ms) May 10 21:31:35.816: INFO: (16) /api/v1/namespaces/proxy-10/pods/proxy-service-2wh9c-xgvsk:160/proxy/: foo (200; 3.264701ms) May 10 21:31:35.816: INFO: (16) /api/v1/namespaces/proxy-10/pods/https:proxy-service-2wh9c-xgvsk:460/proxy/: tls baz (200; 3.323042ms) May 10 21:31:35.817: INFO: (16) /api/v1/namespaces/proxy-10/pods/proxy-service-2wh9c-xgvsk:162/proxy/: bar (200; 3.386278ms) May 10 21:31:35.817: INFO: (16) /api/v1/namespaces/proxy-10/pods/http:proxy-service-2wh9c-xgvsk:162/proxy/: bar (200; 3.93379ms) May 10 21:31:35.817: INFO: (16) /api/v1/namespaces/proxy-10/pods/proxy-service-2wh9c-xgvsk:1080/proxy/: testtest (200; 5.057041ms) May 10 21:31:35.818: INFO: (16) /api/v1/namespaces/proxy-10/services/proxy-service-2wh9c:portname1/proxy/: foo (200; 5.050381ms) May 10 21:31:35.818: INFO: (16) /api/v1/namespaces/proxy-10/services/http:proxy-service-2wh9c:portname2/proxy/: bar (200; 5.03211ms) May 10 
21:31:35.818: INFO: (16) /api/v1/namespaces/proxy-10/services/https:proxy-service-2wh9c:tlsportname1/proxy/: tls baz (200; 5.0876ms) May 10 21:31:35.818: INFO: (16) /api/v1/namespaces/proxy-10/services/http:proxy-service-2wh9c:portname1/proxy/: foo (200; 5.05729ms) May 10 21:31:35.818: INFO: (16) /api/v1/namespaces/proxy-10/services/proxy-service-2wh9c:portname2/proxy/: bar (200; 5.103303ms) May 10 21:31:35.818: INFO: (16) /api/v1/namespaces/proxy-10/pods/https:proxy-service-2wh9c-xgvsk:462/proxy/: tls qux (200; 5.072313ms) May 10 21:31:35.818: INFO: (16) /api/v1/namespaces/proxy-10/pods/http:proxy-service-2wh9c-xgvsk:1080/proxy/: te... (200; 5.233268ms) May 10 21:31:35.821: INFO: (17) /api/v1/namespaces/proxy-10/pods/https:proxy-service-2wh9c-xgvsk:462/proxy/: tls qux (200; 2.497761ms) May 10 21:31:35.821: INFO: (17) /api/v1/namespaces/proxy-10/pods/proxy-service-2wh9c-xgvsk:160/proxy/: foo (200; 2.851035ms) May 10 21:31:35.822: INFO: (17) /api/v1/namespaces/proxy-10/pods/proxy-service-2wh9c-xgvsk:1080/proxy/: testte... 
(200; 3.877748ms) May 10 21:31:35.823: INFO: (17) /api/v1/namespaces/proxy-10/pods/proxy-service-2wh9c-xgvsk:162/proxy/: bar (200; 4.372028ms) May 10 21:31:35.823: INFO: (17) /api/v1/namespaces/proxy-10/pods/https:proxy-service-2wh9c-xgvsk:443/proxy/: test (200; 4.977076ms) May 10 21:31:35.823: INFO: (17) /api/v1/namespaces/proxy-10/pods/http:proxy-service-2wh9c-xgvsk:160/proxy/: foo (200; 4.98201ms) May 10 21:31:35.824: INFO: (17) /api/v1/namespaces/proxy-10/services/proxy-service-2wh9c:portname2/proxy/: bar (200; 5.375516ms) May 10 21:31:35.824: INFO: (17) /api/v1/namespaces/proxy-10/services/proxy-service-2wh9c:portname1/proxy/: foo (200; 5.402119ms) May 10 21:31:35.824: INFO: (17) /api/v1/namespaces/proxy-10/services/https:proxy-service-2wh9c:tlsportname2/proxy/: tls qux (200; 5.414489ms) May 10 21:31:35.827: INFO: (18) /api/v1/namespaces/proxy-10/pods/https:proxy-service-2wh9c-xgvsk:462/proxy/: tls qux (200; 3.462879ms) May 10 21:31:35.827: INFO: (18) /api/v1/namespaces/proxy-10/pods/http:proxy-service-2wh9c-xgvsk:162/proxy/: bar (200; 3.550771ms) May 10 21:31:35.827: INFO: (18) /api/v1/namespaces/proxy-10/pods/proxy-service-2wh9c-xgvsk/proxy/: test (200; 3.510889ms) May 10 21:31:35.827: INFO: (18) /api/v1/namespaces/proxy-10/pods/proxy-service-2wh9c-xgvsk:162/proxy/: bar (200; 3.484576ms) May 10 21:31:35.827: INFO: (18) /api/v1/namespaces/proxy-10/pods/http:proxy-service-2wh9c-xgvsk:1080/proxy/: te... (200; 3.484829ms) May 10 21:31:35.827: INFO: (18) /api/v1/namespaces/proxy-10/pods/https:proxy-service-2wh9c-xgvsk:443/proxy/: testtest (200; 8.068935ms) May 10 21:31:35.862: INFO: (19) /api/v1/namespaces/proxy-10/pods/https:proxy-service-2wh9c-xgvsk:462/proxy/: tls qux (200; 7.997298ms) May 10 21:31:35.862: INFO: (19) /api/v1/namespaces/proxy-10/services/http:proxy-service-2wh9c:portname1/proxy/: foo (200; 8.236183ms) May 10 21:31:35.862: INFO: (19) /api/v1/namespaces/proxy-10/pods/proxy-service-2wh9c-xgvsk:1080/proxy/: testte... 
(200; 8.396416ms) May 10 21:31:35.863: INFO: (19) /api/v1/namespaces/proxy-10/pods/http:proxy-service-2wh9c-xgvsk:160/proxy/: foo (200; 8.417776ms) May 10 21:31:35.863: INFO: (19) /api/v1/namespaces/proxy-10/pods/proxy-service-2wh9c-xgvsk:162/proxy/: bar (200; 8.401887ms) May 10 21:31:35.863: INFO: (19) /api/v1/namespaces/proxy-10/services/https:proxy-service-2wh9c:tlsportname1/proxy/: tls baz (200; 8.517126ms) STEP: deleting ReplicationController proxy-service-2wh9c in namespace proxy-10, will wait for the garbage collector to delete the pods May 10 21:31:35.922: INFO: Deleting ReplicationController proxy-service-2wh9c took: 6.998895ms May 10 21:31:36.223: INFO: Terminating ReplicationController proxy-service-2wh9c pods took: 300.270686ms [AfterEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 10 21:31:49.523: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "proxy-10" for this suite. 
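Every attempt above hits an apiserver proxy URL of the form `/api/v1/namespaces/<ns>/pods/<scheme>:<pod>:<port>/proxy/` (or the service equivalent with a named port). A minimal sketch of how those paths are assembled — the helper functions are hypothetical, not part of the e2e framework:

```python
from typing import Optional


def pod_proxy_path(namespace: str, pod: str, port: Optional[int] = None,
                   scheme: Optional[str] = None) -> str:
    """Build an apiserver proxy path for a pod, matching the URLs in the log.

    scheme ("http"/"https") and port are optional; when present they are
    joined to the pod name with colons, e.g. "http:mypod:160".
    """
    target = pod if port is None else f"{pod}:{port}"
    if scheme is not None:
        target = f"{scheme}:{target}"
    return f"/api/v1/namespaces/{namespace}/pods/{target}/proxy/"


def service_proxy_path(namespace: str, service: str,
                       port_name: Optional[str] = None,
                       scheme: Optional[str] = None) -> str:
    """Same shape for services, using a named service port instead of a number."""
    target = service if port_name is None else f"{service}:{port_name}"
    if scheme is not None:
        target = f"{scheme}:{target}"
    return f"/api/v1/namespaces/{namespace}/services/{target}/proxy/"
```

For example, `pod_proxy_path("proxy-10", "proxy-service-2wh9c-xgvsk", 160, "http")` reproduces the URL of the first attempt in case (0) above.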
• [SLOW TEST:23.054 seconds] [sig-network] Proxy /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:57 should proxy through a service and a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Proxy version v1 should proxy through a service and a pod [Conformance]","total":278,"completed":80,"skipped":1318,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicationController should release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 10 21:31:49.534: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Given a ReplicationController is created STEP: When the matched label of one of its pods change May 10 21:31:49.636: INFO: Pod name pod-release: Found 0 pods out of 1 May 10 21:31:54.674: INFO: Pod name pod-release: Found 1 pods out of 1 STEP: Then the pod is released [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 10 21:31:54.741: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-1503" for this suite. 
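The ReplicationController test above changes one pod's labels so it no longer matches the controller's selector, and the controller then releases the pod (drops it from its owned set) rather than deleting it. A toy model of that reconciliation decision — the function names are assumptions for illustration, not the real controller code:

```python
def matches_selector(selector: dict, labels: dict) -> bool:
    """An RC selector matches when every selector key/value pair is present in the labels."""
    return all(labels.get(k) == v for k, v in selector.items())


def reconcile(selector: dict, pods: dict) -> tuple:
    """Split pods (name -> labels) into (still_owned, released) by selector match."""
    owned, released = [], []
    for name, labels in pods.items():
        (owned if matches_selector(selector, labels) else released).append(name)
    return owned, released
```

In the test's terms, editing the `name: pod-release` label on a pod moves it from the owned list to the released list on the next reconcile.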
• [SLOW TEST:5.321 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] ReplicationController should release no longer matching pods [Conformance]","total":278,"completed":81,"skipped":1332,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 10 21:31:54.856: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward api env vars May 10 21:31:54.975: INFO: Waiting up to 5m0s for pod "downward-api-a3f448e8-3f46-4387-9613-9dfe256e7981" in namespace "downward-api-9350" to be "success or failure" May 10 21:31:55.007: INFO: Pod "downward-api-a3f448e8-3f46-4387-9613-9dfe256e7981": Phase="Pending", Reason="", readiness=false. Elapsed: 31.874912ms May 10 21:31:57.011: INFO: Pod "downward-api-a3f448e8-3f46-4387-9613-9dfe256e7981": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.035921908s May 10 21:31:59.014: INFO: Pod "downward-api-a3f448e8-3f46-4387-9613-9dfe256e7981": Phase="Running", Reason="", readiness=true. Elapsed: 4.039734196s May 10 21:32:01.018: INFO: Pod "downward-api-a3f448e8-3f46-4387-9613-9dfe256e7981": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.043207001s STEP: Saw pod success May 10 21:32:01.018: INFO: Pod "downward-api-a3f448e8-3f46-4387-9613-9dfe256e7981" satisfied condition "success or failure" May 10 21:32:01.020: INFO: Trying to get logs from node jerma-worker2 pod downward-api-a3f448e8-3f46-4387-9613-9dfe256e7981 container dapi-container: STEP: delete the pod May 10 21:32:01.042: INFO: Waiting for pod downward-api-a3f448e8-3f46-4387-9613-9dfe256e7981 to disappear May 10 21:32:01.066: INFO: Pod downward-api-a3f448e8-3f46-4387-9613-9dfe256e7981 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 10 21:32:01.066: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-9350" for this suite. 
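The dapi-container in the test above reads its own `limits.cpu`/`limits.memory` and `requests.cpu`/`requests.memory` through Downward API `resourceFieldRef` env vars. A sketch of how such a reference resolves against a container spec — the resolver helper and the example resource values are illustrative, not taken from the test:

```python
def resolve_resource_field(container: dict, ref: str) -> str:
    """Resolve a resourceFieldRef path like "limits.cpu" against a container spec."""
    section, resource = ref.split(".", 1)  # "limits.cpu" -> ("limits", "cpu")
    return container["resources"][section][resource]


# A container spec fragment with made-up resource values:
container = {
    "name": "dapi-container",
    "resources": {
        "requests": {"cpu": "250m", "memory": "32Mi"},
        "limits": {"cpu": "1250m", "memory": "64Mi"},
    },
}
```

Each env var in the pod spec names one such path, so the container sees e.g. `CPU_LIMIT` set to the resolved limit value.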
• [SLOW TEST:6.218 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:33 should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]","total":278,"completed":82,"skipped":1405,"failed":0} SSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 10 21:32:01.074: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:86 May 10 21:32:01.122: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready May 10 21:32:01.152: INFO: Waiting for terminating namespaces to be deleted... 
May 10 21:32:01.154: INFO: Logging pods the kubelet thinks is on node jerma-worker before test May 10 21:32:01.171: INFO: kindnet-c5svj from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container statuses recorded) May 10 21:32:01.171: INFO: Container kindnet-cni ready: true, restart count 0 May 10 21:32:01.171: INFO: kube-proxy-44mlz from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container statuses recorded) May 10 21:32:01.171: INFO: Container kube-proxy ready: true, restart count 0 May 10 21:32:01.171: INFO: Logging pods the kubelet thinks is on node jerma-worker2 before test May 10 21:32:01.176: INFO: kube-proxy-75q42 from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container statuses recorded) May 10 21:32:01.176: INFO: Container kube-proxy ready: true, restart count 0 May 10 21:32:01.176: INFO: kube-hunter-8g6pb from default started at 2020-03-26 15:21:33 +0000 UTC (1 container statuses recorded) May 10 21:32:01.176: INFO: Container kube-hunter ready: false, restart count 0 May 10 21:32:01.176: INFO: kindnet-zk6sq from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container statuses recorded) May 10 21:32:01.176: INFO: Container kindnet-cni ready: true, restart count 0 May 10 21:32:01.176: INFO: kube-bench-hk6h6 from default started at 2020-03-26 15:21:52 +0000 UTC (1 container statuses recorded) May 10 21:32:01.176: INFO: Container kube-bench ready: false, restart count 0 [It] validates that NodeSelector is respected if matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random label on the found node. STEP: verifying the node has the label kubernetes.io/e2e-c1d2821b-523f-47c7-9fc8-5626fc41f5dc 42 STEP: Trying to relaunch the pod, now with labels. 
STEP: removing the label kubernetes.io/e2e-c1d2821b-523f-47c7-9fc8-5626fc41f5dc off the node jerma-worker2 STEP: verifying the node doesn't have the label kubernetes.io/e2e-c1d2821b-523f-47c7-9fc8-5626fc41f5dc [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 10 21:32:09.524: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-1425" for this suite. [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:77 • [SLOW TEST:8.457 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates that NodeSelector is respected if matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance]","total":278,"completed":83,"skipped":1412,"failed":0} SSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should verify ResourceQuota with best effort scope. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 10 21:32:09.531: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should verify ResourceQuota with best effort scope. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a ResourceQuota with best effort scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a ResourceQuota with not best effort scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a best-effort pod STEP: Ensuring resource quota with best effort scope captures the pod usage STEP: Ensuring resource quota with not best effort ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage STEP: Creating a not best-effort pod STEP: Ensuring resource quota with not best effort scope captures the pod usage STEP: Ensuring resource quota with best effort scope ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 10 21:32:25.815: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-5933" for this suite. • [SLOW TEST:16.292 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should verify ResourceQuota with best effort scope. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with best effort scope. 
[Conformance]","total":278,"completed":84,"skipped":1419,"failed":0} S ------------------------------ [sig-apps] Job should adopt matching orphans and release non-matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 10 21:32:25.823: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename job STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching orphans and release non-matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a job STEP: Ensuring active pods == parallelism STEP: Orphaning one of the Job's Pods May 10 21:32:32.511: INFO: Successfully updated pod "adopt-release-2blk9" STEP: Checking that the Job readopts the Pod May 10 21:32:32.511: INFO: Waiting up to 15m0s for pod "adopt-release-2blk9" in namespace "job-9612" to be "adopted" May 10 21:32:32.517: INFO: Pod "adopt-release-2blk9": Phase="Running", Reason="", readiness=true. Elapsed: 6.232929ms May 10 21:32:34.521: INFO: Pod "adopt-release-2blk9": Phase="Running", Reason="", readiness=true. Elapsed: 2.009701354s May 10 21:32:34.521: INFO: Pod "adopt-release-2blk9" satisfied condition "adopted" STEP: Removing the labels from the Job's Pod May 10 21:32:35.036: INFO: Successfully updated pod "adopt-release-2blk9" STEP: Checking that the Job releases the Pod May 10 21:32:35.036: INFO: Waiting up to 15m0s for pod "adopt-release-2blk9" in namespace "job-9612" to be "released" May 10 21:32:35.106: INFO: Pod "adopt-release-2blk9": Phase="Running", Reason="", readiness=true. Elapsed: 70.228288ms May 10 21:32:37.110: INFO: Pod "adopt-release-2blk9": Phase="Running", Reason="", readiness=true. 
Elapsed: 2.073980468s May 10 21:32:37.110: INFO: Pod "adopt-release-2blk9" satisfied condition "released" [AfterEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 10 21:32:37.110: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "job-9612" for this suite. • [SLOW TEST:11.300 seconds] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should adopt matching orphans and release non-matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Job should adopt matching orphans and release non-matching pods [Conformance]","total":278,"completed":85,"skipped":1420,"failed":0} SSSS ------------------------------ [sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 10 21:32:37.124: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 10 21:32:37.260: INFO: pod1.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod3", UID:"49e3a5e8-7c85-4f3a-8d9b-0cc83dde3010", Controller:(*bool)(0xc00580a3da), BlockOwnerDeletion:(*bool)(0xc00580a3db)}} May 10 21:32:37.298: INFO: pod2.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", 
Kind:"Pod", Name:"pod1", UID:"7dfe2e99-b538-464f-9cb0-d1f03e8d566b", Controller:(*bool)(0xc0056b0cba), BlockOwnerDeletion:(*bool)(0xc0056b0cbb)}} May 10 21:32:37.351: INFO: pod3.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod2", UID:"a479a6ec-d142-4657-b0aa-cae2b2b52917", Controller:(*bool)(0xc00257b3c2), BlockOwnerDeletion:(*bool)(0xc00257b3c3)}} [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 10 21:32:42.371: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-1847" for this suite. • [SLOW TEST:5.260 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance]","total":278,"completed":86,"skipped":1424,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 10 21:32:42.384: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to restart watching from the last resource version observed by the previous watch [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating a watch on configmaps STEP: creating a new configmap STEP: modifying the configmap once STEP: closing the watch once it receives two notifications May 10 21:32:42.503: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-3729 /api/v1/namespaces/watch-3729/configmaps/e2e-watch-test-watch-closed 0b2df745-78ca-42b9-8781-191f5afbf29a 15064884 0 2020-05-10 21:32:42 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},} May 10 21:32:42.503: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-3729 /api/v1/namespaces/watch-3729/configmaps/e2e-watch-test-watch-closed 0b2df745-78ca-42b9-8781-191f5afbf29a 15064885 0 2020-05-10 21:32:42 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} STEP: modifying the configmap a second time, while the watch is closed STEP: creating a new watch on configmaps from the last resource version observed by the first watch STEP: deleting the configmap STEP: Expecting to observe notifications for all changes to the configmap since the first watch closed May 10 21:32:42.528: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-3729 /api/v1/namespaces/watch-3729/configmaps/e2e-watch-test-watch-closed 0b2df745-78ca-42b9-8781-191f5afbf29a 15064886 0 2020-05-10 21:32:42 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} May 10 21:32:42.529: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-3729 /api/v1/namespaces/watch-3729/configmaps/e2e-watch-test-watch-closed 0b2df745-78ca-42b9-8781-191f5afbf29a 15064887 0 2020-05-10 21:32:42 +0000 UTC 
map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 10 21:32:42.529: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-3729" for this suite. •{"msg":"PASSED [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance]","total":278,"completed":87,"skipped":1438,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 10 21:32:42.536: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 10 21:32:43.060: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 10 21:32:45.070: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, 
Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724743163, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724743163, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724743163, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724743163, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 10 21:32:48.110: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate pod and apply defaults after mutation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Registering the mutating pod webhook via the AdmissionRegistration API STEP: create a pod that should be updated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 10 21:32:48.156: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-3333" for this suite. STEP: Destroying namespace "webhook-3333-markers" for this suite. 
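[Editor's note] The object registered in the "Registering the mutating pod webhook via the AdmissionRegistration API" step above is a MutatingWebhookConfiguration. A minimal sketch of its shape follows; the service name (e2e-test-webhook) and namespace (webhook-3333) are taken from the log, but the webhook name, path, and caBundle are illustrative placeholders, not values from this run:

```yaml
# Illustrative sketch only -- webhook name, path, and caBundle are
# placeholders, not values recorded in this test run.
apiVersion: admissionregistration.k8s.io/v1
kind: MutatingWebhookConfiguration
metadata:
  name: sample-mutating-webhook     # hypothetical name
webhooks:
  - name: pod-mutator.example.com   # hypothetical; must be a DNS name
    clientConfig:
      service:
        name: e2e-test-webhook      # service name seen in the log above
        namespace: webhook-3333     # namespace seen in the log above
        path: /mutating-pods        # hypothetical path
      caBundle: <base64-encoded CA certificate>
    rules:
      - apiGroups: [""]
        apiVersions: ["v1"]
        operations: ["CREATE"]
        resources: ["pods"]
    admissionReviewVersions: ["v1", "v1beta1"]
    sideEffects: None
    failurePolicy: Fail
```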
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:5.847 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate pod and apply defaults after mutation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","total":278,"completed":88,"skipped":1456,"failed":0} SSSSS ------------------------------ [sig-apps] Deployment deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 10 21:32:48.383: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:69 [It] deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 10 21:32:48.512: INFO: Pod name cleanup-pod: Found 0 pods out of 1 May 10 21:32:53.515: INFO: Pod name cleanup-pod: Found 1 pods out of 1 STEP: ensuring each pod is running May 10 21:32:53.515: INFO: Creating deployment test-cleanup-deployment STEP: Waiting for deployment test-cleanup-deployment history to be cleaned up [AfterEach] [sig-apps] Deployment 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:63 May 10 21:32:53.537: INFO: Deployment "test-cleanup-deployment": &Deployment{ObjectMeta:{test-cleanup-deployment deployment-7855 /apis/apps/v1/namespaces/deployment-7855/deployments/test-cleanup-deployment ffed7ff8-9c84-492e-bac7-d49304601c75 15065020 1 2020-05-10 21:32:53 +0000 UTC map[name:cleanup-pod] map[] [] [] []},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:cleanup-pod] map[] [] [] []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0030efe38 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%,},},MinReadySeconds:0,RevisionHistoryLimit:*0,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:0,Replicas:0,UpdatedReplicas:0,AvailableReplicas:0,UnavailableReplicas:0,Conditions:[]DeploymentCondition{},ReadyReplicas:0,CollisionCount:nil,},} May 10 21:32:53.620: INFO: New ReplicaSet "test-cleanup-deployment-55ffc6b7b6" of Deployment "test-cleanup-deployment": &ReplicaSet{ObjectMeta:{test-cleanup-deployment-55ffc6b7b6 deployment-7855 /apis/apps/v1/namespaces/deployment-7855/replicasets/test-cleanup-deployment-55ffc6b7b6
b46c8e7a-020c-46ad-b76a-232269a37aa5 15065027 1 2020-05-10 21:32:53 +0000 UTC map[name:cleanup-pod pod-template-hash:55ffc6b7b6] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-cleanup-deployment ffed7ff8-9c84-492e-bac7-d49304601c75 0xc00585eca7 0xc00585eca8}] [] []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod-template-hash: 55ffc6b7b6,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:cleanup-pod pod-template-hash:55ffc6b7b6] map[] [] [] []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc00585eda8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} May 10 21:32:53.620: INFO: All old ReplicaSets of Deployment "test-cleanup-deployment": May 10 21:32:53.620: INFO: &ReplicaSet{ObjectMeta:{test-cleanup-controller deployment-7855 /apis/apps/v1/namespaces/deployment-7855/replicasets/test-cleanup-controller d6fd3efb-2791-4522-9184-18932287c2ba 15065021 1 2020-05-10 21:32:48 +0000 UTC map[name:cleanup-pod pod:httpd] map[] [{apps/v1 Deployment test-cleanup-deployment ffed7ff8-9c84-492e-bac7-d49304601c75 0xc00585e777 0xc00585e778}] [] 
[]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:cleanup-pod pod:httpd] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc00585eb78 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} May 10 21:32:53.664: INFO: Pod "test-cleanup-controller-rzc9x" is available: &Pod{ObjectMeta:{test-cleanup-controller-rzc9x test-cleanup-controller- deployment-7855 /api/v1/namespaces/deployment-7855/pods/test-cleanup-controller-rzc9x 79091abd-d811-48e1-8efc-ffc058a3aeb7 15065002 0 2020-05-10 21:32:48 +0000 UTC map[name:cleanup-pod pod:httpd] map[] [{apps/v1 ReplicaSet test-cleanup-controller d6fd3efb-2791-4522-9184-18932287c2ba 0xc00585fe97 0xc00585fe98}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-8ptgv,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-8ptgv,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-8ptgv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Va
lue:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-10 21:32:48 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-10 21:32:51 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-10 21:32:51 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-10 21:32:48 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.8,PodIP:10.244.2.153,StartTime:2020-05-10 21:32:48 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-05-10 21:32:51 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://9b975daf93ec085e8f4bd3999f332b49e527a671945948034f39e98eb7209cae,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.153,},},EphemeralContainerStatuses:[]ContainerStatus{},},} May 10 21:32:53.664: INFO: 
Pod "test-cleanup-deployment-55ffc6b7b6-6z744" is not available: &Pod{ObjectMeta:{test-cleanup-deployment-55ffc6b7b6-6z744 test-cleanup-deployment-55ffc6b7b6- deployment-7855 /api/v1/namespaces/deployment-7855/pods/test-cleanup-deployment-55ffc6b7b6-6z744 3e9bb63c-f948-44ff-ae44-653eb78c7392 15065028 0 2020-05-10 21:32:53 +0000 UTC map[name:cleanup-pod pod-template-hash:55ffc6b7b6] map[] [{apps/v1 ReplicaSet test-cleanup-deployment-55ffc6b7b6 b46c8e7a-020c-46ad-b76a-232269a37aa5 0xc00565a127 0xc00565a128}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-8ptgv,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-8ptgv,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-8ptgv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeD
evice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-10 21:32:53 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 10 21:32:53.664: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-7855" for this suite. 
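[Editor's note] The cleanup behavior verified above comes from RevisionHistoryLimit:*0 in the Deployment dump: with a history limit of zero, old ReplicaSets are deleted as soon as they are superseded. Reconstructed as a manifest using only the fields visible in the dump (everything else left at defaults):

```yaml
# Reconstructed from the Deployment dump in the log above; fields not
# shown there are left at their defaults.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: test-cleanup-deployment
  namespace: deployment-7855
  labels:
    name: cleanup-pod
spec:
  replicas: 1
  revisionHistoryLimit: 0   # old ReplicaSets are cleaned up immediately
  selector:
    matchLabels:
      name: cleanup-pod
  template:
    metadata:
      labels:
        name: cleanup-pod
    spec:
      containers:
        - name: agnhost
          image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8
          imagePullPolicy: IfNotPresent
```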
• [SLOW TEST:5.319 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Deployment deployment should delete old replica sets [Conformance]","total":278,"completed":89,"skipped":1461,"failed":0} SSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 10 21:32:53.702: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64 STEP: create the container to handle the HTTPGet hook request. 
[It] should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the pod with lifecycle hook STEP: delete the pod with lifecycle hook May 10 21:33:04.044: INFO: Waiting for pod pod-with-prestop-http-hook to disappear May 10 21:33:04.048: INFO: Pod pod-with-prestop-http-hook still exists May 10 21:33:06.049: INFO: Waiting for pod pod-with-prestop-http-hook to disappear May 10 21:33:06.072: INFO: Pod pod-with-prestop-http-hook still exists May 10 21:33:08.049: INFO: Waiting for pod pod-with-prestop-http-hook to disappear May 10 21:33:08.054: INFO: Pod pod-with-prestop-http-hook still exists May 10 21:33:10.049: INFO: Waiting for pod pod-with-prestop-http-hook to disappear May 10 21:33:10.053: INFO: Pod pod-with-prestop-http-hook no longer exists STEP: check prestop hook [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 10 21:33:10.059: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-840" for this suite. 
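[Editor's note] The pod deleted above, pod-with-prestop-http-hook, carries a container lifecycle preStop handler of the httpGet variety, which the kubelet invokes before terminating the container; the test then checks that the handler pod received the request. A sketch of that shape, where the image, path, and port are illustrative placeholders rather than values from this run:

```yaml
# Illustrative sketch -- image, path, and port are placeholders,
# not values recorded in this test run.
apiVersion: v1
kind: Pod
metadata:
  name: pod-with-prestop-http-hook
spec:
  containers:
    - name: pod-with-prestop-http-hook
      image: k8s.gcr.io/pause:3.1      # hypothetical image
      lifecycle:
        preStop:
          httpGet:
            path: /echo?msg=prestop    # hypothetical path
            port: 8080                 # hypothetical port
```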
• [SLOW TEST:16.363 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance]","total":278,"completed":90,"skipped":1464,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 10 21:33:10.066: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-1819 A)" && test -n "$$check" && echo 
OK > /results/wheezy_udp@dns-test-service.dns-1819;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-1819 A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-1819;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-1819.svc A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-1819.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-1819.svc A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-1819.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-1819.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-1819.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-1819.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-1819.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-1819.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-1819.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-1819.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-1819.svc;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-1819.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 237.110.106.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.106.110.237_udp@PTR;check="$$(dig +tcp +noall +answer +search 237.110.106.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.106.110.237_tcp@PTR;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-1819 A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-1819;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-1819 A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-1819;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-1819.svc A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-1819.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-1819.svc A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-1819.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-1819.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-1819.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-1819.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-1819.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-1819.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-1819.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-1819.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-1819.svc;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-1819.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 237.110.106.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.106.110.237_udp@PTR;check="$$(dig +tcp +noall +answer +search 237.110.106.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.106.110.237_tcp@PTR;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers May 10 21:33:18.370: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-1819/dns-test-f56ad876-ea4d-4f06-94a2-52b9671538fc: the server could not find the requested resource (get pods dns-test-f56ad876-ea4d-4f06-94a2-52b9671538fc) May 10 21:33:18.374: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-1819/dns-test-f56ad876-ea4d-4f06-94a2-52b9671538fc: the server could not find the requested resource (get pods dns-test-f56ad876-ea4d-4f06-94a2-52b9671538fc) May 10 21:33:18.377: INFO: Unable to read wheezy_udp@dns-test-service.dns-1819 from pod dns-1819/dns-test-f56ad876-ea4d-4f06-94a2-52b9671538fc: the server could not find the requested resource (get pods dns-test-f56ad876-ea4d-4f06-94a2-52b9671538fc) May 10 21:33:18.380: INFO: Unable to read wheezy_tcp@dns-test-service.dns-1819 from pod dns-1819/dns-test-f56ad876-ea4d-4f06-94a2-52b9671538fc: the server could not find the requested resource (get pods dns-test-f56ad876-ea4d-4f06-94a2-52b9671538fc) May 10 21:33:18.384: INFO: Unable to read wheezy_udp@dns-test-service.dns-1819.svc from pod dns-1819/dns-test-f56ad876-ea4d-4f06-94a2-52b9671538fc: the server could not find the requested resource (get pods 
dns-test-f56ad876-ea4d-4f06-94a2-52b9671538fc) May 10 21:33:18.387: INFO: Unable to read wheezy_tcp@dns-test-service.dns-1819.svc from pod dns-1819/dns-test-f56ad876-ea4d-4f06-94a2-52b9671538fc: the server could not find the requested resource (get pods dns-test-f56ad876-ea4d-4f06-94a2-52b9671538fc) May 10 21:33:18.390: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-1819.svc from pod dns-1819/dns-test-f56ad876-ea4d-4f06-94a2-52b9671538fc: the server could not find the requested resource (get pods dns-test-f56ad876-ea4d-4f06-94a2-52b9671538fc) May 10 21:33:18.392: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-1819.svc from pod dns-1819/dns-test-f56ad876-ea4d-4f06-94a2-52b9671538fc: the server could not find the requested resource (get pods dns-test-f56ad876-ea4d-4f06-94a2-52b9671538fc) May 10 21:33:18.412: INFO: Unable to read jessie_udp@dns-test-service from pod dns-1819/dns-test-f56ad876-ea4d-4f06-94a2-52b9671538fc: the server could not find the requested resource (get pods dns-test-f56ad876-ea4d-4f06-94a2-52b9671538fc) May 10 21:33:18.418: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-1819/dns-test-f56ad876-ea4d-4f06-94a2-52b9671538fc: the server could not find the requested resource (get pods dns-test-f56ad876-ea4d-4f06-94a2-52b9671538fc) May 10 21:33:18.422: INFO: Unable to read jessie_udp@dns-test-service.dns-1819 from pod dns-1819/dns-test-f56ad876-ea4d-4f06-94a2-52b9671538fc: the server could not find the requested resource (get pods dns-test-f56ad876-ea4d-4f06-94a2-52b9671538fc) May 10 21:33:18.425: INFO: Unable to read jessie_tcp@dns-test-service.dns-1819 from pod dns-1819/dns-test-f56ad876-ea4d-4f06-94a2-52b9671538fc: the server could not find the requested resource (get pods dns-test-f56ad876-ea4d-4f06-94a2-52b9671538fc) May 10 21:33:18.427: INFO: Unable to read jessie_udp@dns-test-service.dns-1819.svc from pod dns-1819/dns-test-f56ad876-ea4d-4f06-94a2-52b9671538fc: the server could not find the 
requested resource (get pods dns-test-f56ad876-ea4d-4f06-94a2-52b9671538fc) May 10 21:33:18.430: INFO: Unable to read jessie_tcp@dns-test-service.dns-1819.svc from pod dns-1819/dns-test-f56ad876-ea4d-4f06-94a2-52b9671538fc: the server could not find the requested resource (get pods dns-test-f56ad876-ea4d-4f06-94a2-52b9671538fc) May 10 21:33:18.432: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-1819.svc from pod dns-1819/dns-test-f56ad876-ea4d-4f06-94a2-52b9671538fc: the server could not find the requested resource (get pods dns-test-f56ad876-ea4d-4f06-94a2-52b9671538fc) May 10 21:33:18.434: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-1819.svc from pod dns-1819/dns-test-f56ad876-ea4d-4f06-94a2-52b9671538fc: the server could not find the requested resource (get pods dns-test-f56ad876-ea4d-4f06-94a2-52b9671538fc) May 10 21:33:18.450: INFO: Lookups using dns-1819/dns-test-f56ad876-ea4d-4f06-94a2-52b9671538fc failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-1819 wheezy_tcp@dns-test-service.dns-1819 wheezy_udp@dns-test-service.dns-1819.svc wheezy_tcp@dns-test-service.dns-1819.svc wheezy_udp@_http._tcp.dns-test-service.dns-1819.svc wheezy_tcp@_http._tcp.dns-test-service.dns-1819.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-1819 jessie_tcp@dns-test-service.dns-1819 jessie_udp@dns-test-service.dns-1819.svc jessie_tcp@dns-test-service.dns-1819.svc jessie_udp@_http._tcp.dns-test-service.dns-1819.svc jessie_tcp@_http._tcp.dns-test-service.dns-1819.svc] May 10 21:33:23.455: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-1819/dns-test-f56ad876-ea4d-4f06-94a2-52b9671538fc: the server could not find the requested resource (get pods dns-test-f56ad876-ea4d-4f06-94a2-52b9671538fc) May 10 21:33:23.459: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-1819/dns-test-f56ad876-ea4d-4f06-94a2-52b9671538fc: the server could not 
find the requested resource (get pods dns-test-f56ad876-ea4d-4f06-94a2-52b9671538fc) May 10 21:33:23.463: INFO: Unable to read wheezy_udp@dns-test-service.dns-1819 from pod dns-1819/dns-test-f56ad876-ea4d-4f06-94a2-52b9671538fc: the server could not find the requested resource (get pods dns-test-f56ad876-ea4d-4f06-94a2-52b9671538fc) May 10 21:33:23.466: INFO: Unable to read wheezy_tcp@dns-test-service.dns-1819 from pod dns-1819/dns-test-f56ad876-ea4d-4f06-94a2-52b9671538fc: the server could not find the requested resource (get pods dns-test-f56ad876-ea4d-4f06-94a2-52b9671538fc) May 10 21:33:23.469: INFO: Unable to read wheezy_udp@dns-test-service.dns-1819.svc from pod dns-1819/dns-test-f56ad876-ea4d-4f06-94a2-52b9671538fc: the server could not find the requested resource (get pods dns-test-f56ad876-ea4d-4f06-94a2-52b9671538fc) May 10 21:33:23.472: INFO: Unable to read wheezy_tcp@dns-test-service.dns-1819.svc from pod dns-1819/dns-test-f56ad876-ea4d-4f06-94a2-52b9671538fc: the server could not find the requested resource (get pods dns-test-f56ad876-ea4d-4f06-94a2-52b9671538fc) May 10 21:33:23.475: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-1819.svc from pod dns-1819/dns-test-f56ad876-ea4d-4f06-94a2-52b9671538fc: the server could not find the requested resource (get pods dns-test-f56ad876-ea4d-4f06-94a2-52b9671538fc) May 10 21:33:23.478: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-1819.svc from pod dns-1819/dns-test-f56ad876-ea4d-4f06-94a2-52b9671538fc: the server could not find the requested resource (get pods dns-test-f56ad876-ea4d-4f06-94a2-52b9671538fc) May 10 21:33:23.499: INFO: Unable to read jessie_udp@dns-test-service from pod dns-1819/dns-test-f56ad876-ea4d-4f06-94a2-52b9671538fc: the server could not find the requested resource (get pods dns-test-f56ad876-ea4d-4f06-94a2-52b9671538fc) May 10 21:33:23.501: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-1819/dns-test-f56ad876-ea4d-4f06-94a2-52b9671538fc: 
the server could not find the requested resource (get pods dns-test-f56ad876-ea4d-4f06-94a2-52b9671538fc) May 10 21:33:23.503: INFO: Unable to read jessie_udp@dns-test-service.dns-1819 from pod dns-1819/dns-test-f56ad876-ea4d-4f06-94a2-52b9671538fc: the server could not find the requested resource (get pods dns-test-f56ad876-ea4d-4f06-94a2-52b9671538fc) May 10 21:33:23.505: INFO: Unable to read jessie_tcp@dns-test-service.dns-1819 from pod dns-1819/dns-test-f56ad876-ea4d-4f06-94a2-52b9671538fc: the server could not find the requested resource (get pods dns-test-f56ad876-ea4d-4f06-94a2-52b9671538fc) May 10 21:33:23.507: INFO: Unable to read jessie_udp@dns-test-service.dns-1819.svc from pod dns-1819/dns-test-f56ad876-ea4d-4f06-94a2-52b9671538fc: the server could not find the requested resource (get pods dns-test-f56ad876-ea4d-4f06-94a2-52b9671538fc) May 10 21:33:23.509: INFO: Unable to read jessie_tcp@dns-test-service.dns-1819.svc from pod dns-1819/dns-test-f56ad876-ea4d-4f06-94a2-52b9671538fc: the server could not find the requested resource (get pods dns-test-f56ad876-ea4d-4f06-94a2-52b9671538fc) May 10 21:33:23.512: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-1819.svc from pod dns-1819/dns-test-f56ad876-ea4d-4f06-94a2-52b9671538fc: the server could not find the requested resource (get pods dns-test-f56ad876-ea4d-4f06-94a2-52b9671538fc) May 10 21:33:23.514: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-1819.svc from pod dns-1819/dns-test-f56ad876-ea4d-4f06-94a2-52b9671538fc: the server could not find the requested resource (get pods dns-test-f56ad876-ea4d-4f06-94a2-52b9671538fc) May 10 21:33:23.528: INFO: Lookups using dns-1819/dns-test-f56ad876-ea4d-4f06-94a2-52b9671538fc failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-1819 wheezy_tcp@dns-test-service.dns-1819 wheezy_udp@dns-test-service.dns-1819.svc wheezy_tcp@dns-test-service.dns-1819.svc 
wheezy_udp@_http._tcp.dns-test-service.dns-1819.svc wheezy_tcp@_http._tcp.dns-test-service.dns-1819.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-1819 jessie_tcp@dns-test-service.dns-1819 jessie_udp@dns-test-service.dns-1819.svc jessie_tcp@dns-test-service.dns-1819.svc jessie_udp@_http._tcp.dns-test-service.dns-1819.svc jessie_tcp@_http._tcp.dns-test-service.dns-1819.svc] May 10 21:33:28.455: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-1819/dns-test-f56ad876-ea4d-4f06-94a2-52b9671538fc: the server could not find the requested resource (get pods dns-test-f56ad876-ea4d-4f06-94a2-52b9671538fc) May 10 21:33:28.458: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-1819/dns-test-f56ad876-ea4d-4f06-94a2-52b9671538fc: the server could not find the requested resource (get pods dns-test-f56ad876-ea4d-4f06-94a2-52b9671538fc) May 10 21:33:28.461: INFO: Unable to read wheezy_udp@dns-test-service.dns-1819 from pod dns-1819/dns-test-f56ad876-ea4d-4f06-94a2-52b9671538fc: the server could not find the requested resource (get pods dns-test-f56ad876-ea4d-4f06-94a2-52b9671538fc) May 10 21:33:28.464: INFO: Unable to read wheezy_tcp@dns-test-service.dns-1819 from pod dns-1819/dns-test-f56ad876-ea4d-4f06-94a2-52b9671538fc: the server could not find the requested resource (get pods dns-test-f56ad876-ea4d-4f06-94a2-52b9671538fc) May 10 21:33:28.468: INFO: Unable to read wheezy_udp@dns-test-service.dns-1819.svc from pod dns-1819/dns-test-f56ad876-ea4d-4f06-94a2-52b9671538fc: the server could not find the requested resource (get pods dns-test-f56ad876-ea4d-4f06-94a2-52b9671538fc) May 10 21:33:28.471: INFO: Unable to read wheezy_tcp@dns-test-service.dns-1819.svc from pod dns-1819/dns-test-f56ad876-ea4d-4f06-94a2-52b9671538fc: the server could not find the requested resource (get pods dns-test-f56ad876-ea4d-4f06-94a2-52b9671538fc) May 10 21:33:28.474: INFO: Unable to read 
wheezy_udp@_http._tcp.dns-test-service.dns-1819.svc from pod dns-1819/dns-test-f56ad876-ea4d-4f06-94a2-52b9671538fc: the server could not find the requested resource (get pods dns-test-f56ad876-ea4d-4f06-94a2-52b9671538fc) May 10 21:33:28.477: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-1819.svc from pod dns-1819/dns-test-f56ad876-ea4d-4f06-94a2-52b9671538fc: the server could not find the requested resource (get pods dns-test-f56ad876-ea4d-4f06-94a2-52b9671538fc) May 10 21:33:28.500: INFO: Unable to read jessie_udp@dns-test-service from pod dns-1819/dns-test-f56ad876-ea4d-4f06-94a2-52b9671538fc: the server could not find the requested resource (get pods dns-test-f56ad876-ea4d-4f06-94a2-52b9671538fc) May 10 21:33:28.503: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-1819/dns-test-f56ad876-ea4d-4f06-94a2-52b9671538fc: the server could not find the requested resource (get pods dns-test-f56ad876-ea4d-4f06-94a2-52b9671538fc) May 10 21:33:28.507: INFO: Unable to read jessie_udp@dns-test-service.dns-1819 from pod dns-1819/dns-test-f56ad876-ea4d-4f06-94a2-52b9671538fc: the server could not find the requested resource (get pods dns-test-f56ad876-ea4d-4f06-94a2-52b9671538fc) May 10 21:33:28.510: INFO: Unable to read jessie_tcp@dns-test-service.dns-1819 from pod dns-1819/dns-test-f56ad876-ea4d-4f06-94a2-52b9671538fc: the server could not find the requested resource (get pods dns-test-f56ad876-ea4d-4f06-94a2-52b9671538fc) May 10 21:33:28.513: INFO: Unable to read jessie_udp@dns-test-service.dns-1819.svc from pod dns-1819/dns-test-f56ad876-ea4d-4f06-94a2-52b9671538fc: the server could not find the requested resource (get pods dns-test-f56ad876-ea4d-4f06-94a2-52b9671538fc) May 10 21:33:28.517: INFO: Unable to read jessie_tcp@dns-test-service.dns-1819.svc from pod dns-1819/dns-test-f56ad876-ea4d-4f06-94a2-52b9671538fc: the server could not find the requested resource (get pods dns-test-f56ad876-ea4d-4f06-94a2-52b9671538fc) May 10 21:33:28.520: 
INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-1819.svc from pod dns-1819/dns-test-f56ad876-ea4d-4f06-94a2-52b9671538fc: the server could not find the requested resource (get pods dns-test-f56ad876-ea4d-4f06-94a2-52b9671538fc) May 10 21:33:28.522: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-1819.svc from pod dns-1819/dns-test-f56ad876-ea4d-4f06-94a2-52b9671538fc: the server could not find the requested resource (get pods dns-test-f56ad876-ea4d-4f06-94a2-52b9671538fc) May 10 21:33:28.539: INFO: Lookups using dns-1819/dns-test-f56ad876-ea4d-4f06-94a2-52b9671538fc failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-1819 wheezy_tcp@dns-test-service.dns-1819 wheezy_udp@dns-test-service.dns-1819.svc wheezy_tcp@dns-test-service.dns-1819.svc wheezy_udp@_http._tcp.dns-test-service.dns-1819.svc wheezy_tcp@_http._tcp.dns-test-service.dns-1819.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-1819 jessie_tcp@dns-test-service.dns-1819 jessie_udp@dns-test-service.dns-1819.svc jessie_tcp@dns-test-service.dns-1819.svc jessie_udp@_http._tcp.dns-test-service.dns-1819.svc jessie_tcp@_http._tcp.dns-test-service.dns-1819.svc] May 10 21:33:33.455: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-1819/dns-test-f56ad876-ea4d-4f06-94a2-52b9671538fc: the server could not find the requested resource (get pods dns-test-f56ad876-ea4d-4f06-94a2-52b9671538fc) May 10 21:33:33.458: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-1819/dns-test-f56ad876-ea4d-4f06-94a2-52b9671538fc: the server could not find the requested resource (get pods dns-test-f56ad876-ea4d-4f06-94a2-52b9671538fc) May 10 21:33:33.461: INFO: Unable to read wheezy_udp@dns-test-service.dns-1819 from pod dns-1819/dns-test-f56ad876-ea4d-4f06-94a2-52b9671538fc: the server could not find the requested resource (get pods dns-test-f56ad876-ea4d-4f06-94a2-52b9671538fc) May 10 
21:33:33.464: INFO: Unable to read wheezy_tcp@dns-test-service.dns-1819 from pod dns-1819/dns-test-f56ad876-ea4d-4f06-94a2-52b9671538fc: the server could not find the requested resource (get pods dns-test-f56ad876-ea4d-4f06-94a2-52b9671538fc) May 10 21:33:33.466: INFO: Unable to read wheezy_udp@dns-test-service.dns-1819.svc from pod dns-1819/dns-test-f56ad876-ea4d-4f06-94a2-52b9671538fc: the server could not find the requested resource (get pods dns-test-f56ad876-ea4d-4f06-94a2-52b9671538fc) May 10 21:33:33.469: INFO: Unable to read wheezy_tcp@dns-test-service.dns-1819.svc from pod dns-1819/dns-test-f56ad876-ea4d-4f06-94a2-52b9671538fc: the server could not find the requested resource (get pods dns-test-f56ad876-ea4d-4f06-94a2-52b9671538fc) May 10 21:33:33.472: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-1819.svc from pod dns-1819/dns-test-f56ad876-ea4d-4f06-94a2-52b9671538fc: the server could not find the requested resource (get pods dns-test-f56ad876-ea4d-4f06-94a2-52b9671538fc) May 10 21:33:33.475: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-1819.svc from pod dns-1819/dns-test-f56ad876-ea4d-4f06-94a2-52b9671538fc: the server could not find the requested resource (get pods dns-test-f56ad876-ea4d-4f06-94a2-52b9671538fc) May 10 21:33:33.492: INFO: Unable to read jessie_udp@dns-test-service from pod dns-1819/dns-test-f56ad876-ea4d-4f06-94a2-52b9671538fc: the server could not find the requested resource (get pods dns-test-f56ad876-ea4d-4f06-94a2-52b9671538fc) May 10 21:33:33.494: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-1819/dns-test-f56ad876-ea4d-4f06-94a2-52b9671538fc: the server could not find the requested resource (get pods dns-test-f56ad876-ea4d-4f06-94a2-52b9671538fc) May 10 21:33:33.496: INFO: Unable to read jessie_udp@dns-test-service.dns-1819 from pod dns-1819/dns-test-f56ad876-ea4d-4f06-94a2-52b9671538fc: the server could not find the requested resource (get pods 
dns-test-f56ad876-ea4d-4f06-94a2-52b9671538fc) May 10 21:33:33.498: INFO: Unable to read jessie_tcp@dns-test-service.dns-1819 from pod dns-1819/dns-test-f56ad876-ea4d-4f06-94a2-52b9671538fc: the server could not find the requested resource (get pods dns-test-f56ad876-ea4d-4f06-94a2-52b9671538fc) May 10 21:33:33.500: INFO: Unable to read jessie_udp@dns-test-service.dns-1819.svc from pod dns-1819/dns-test-f56ad876-ea4d-4f06-94a2-52b9671538fc: the server could not find the requested resource (get pods dns-test-f56ad876-ea4d-4f06-94a2-52b9671538fc) May 10 21:33:33.502: INFO: Unable to read jessie_tcp@dns-test-service.dns-1819.svc from pod dns-1819/dns-test-f56ad876-ea4d-4f06-94a2-52b9671538fc: the server could not find the requested resource (get pods dns-test-f56ad876-ea4d-4f06-94a2-52b9671538fc) May 10 21:33:33.505: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-1819.svc from pod dns-1819/dns-test-f56ad876-ea4d-4f06-94a2-52b9671538fc: the server could not find the requested resource (get pods dns-test-f56ad876-ea4d-4f06-94a2-52b9671538fc) May 10 21:33:33.507: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-1819.svc from pod dns-1819/dns-test-f56ad876-ea4d-4f06-94a2-52b9671538fc: the server could not find the requested resource (get pods dns-test-f56ad876-ea4d-4f06-94a2-52b9671538fc) May 10 21:33:33.522: INFO: Lookups using dns-1819/dns-test-f56ad876-ea4d-4f06-94a2-52b9671538fc failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-1819 wheezy_tcp@dns-test-service.dns-1819 wheezy_udp@dns-test-service.dns-1819.svc wheezy_tcp@dns-test-service.dns-1819.svc wheezy_udp@_http._tcp.dns-test-service.dns-1819.svc wheezy_tcp@_http._tcp.dns-test-service.dns-1819.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-1819 jessie_tcp@dns-test-service.dns-1819 jessie_udp@dns-test-service.dns-1819.svc jessie_tcp@dns-test-service.dns-1819.svc 
jessie_udp@_http._tcp.dns-test-service.dns-1819.svc jessie_tcp@_http._tcp.dns-test-service.dns-1819.svc] May 10 21:33:38.455: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-1819/dns-test-f56ad876-ea4d-4f06-94a2-52b9671538fc: the server could not find the requested resource (get pods dns-test-f56ad876-ea4d-4f06-94a2-52b9671538fc) May 10 21:33:38.476: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-1819/dns-test-f56ad876-ea4d-4f06-94a2-52b9671538fc: the server could not find the requested resource (get pods dns-test-f56ad876-ea4d-4f06-94a2-52b9671538fc) May 10 21:33:38.482: INFO: Unable to read wheezy_udp@dns-test-service.dns-1819 from pod dns-1819/dns-test-f56ad876-ea4d-4f06-94a2-52b9671538fc: the server could not find the requested resource (get pods dns-test-f56ad876-ea4d-4f06-94a2-52b9671538fc) May 10 21:33:38.487: INFO: Unable to read wheezy_tcp@dns-test-service.dns-1819 from pod dns-1819/dns-test-f56ad876-ea4d-4f06-94a2-52b9671538fc: the server could not find the requested resource (get pods dns-test-f56ad876-ea4d-4f06-94a2-52b9671538fc) May 10 21:33:38.489: INFO: Unable to read wheezy_udp@dns-test-service.dns-1819.svc from pod dns-1819/dns-test-f56ad876-ea4d-4f06-94a2-52b9671538fc: the server could not find the requested resource (get pods dns-test-f56ad876-ea4d-4f06-94a2-52b9671538fc) May 10 21:33:38.492: INFO: Unable to read wheezy_tcp@dns-test-service.dns-1819.svc from pod dns-1819/dns-test-f56ad876-ea4d-4f06-94a2-52b9671538fc: the server could not find the requested resource (get pods dns-test-f56ad876-ea4d-4f06-94a2-52b9671538fc) May 10 21:33:38.494: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-1819.svc from pod dns-1819/dns-test-f56ad876-ea4d-4f06-94a2-52b9671538fc: the server could not find the requested resource (get pods dns-test-f56ad876-ea4d-4f06-94a2-52b9671538fc) May 10 21:33:38.496: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-1819.svc from pod 
dns-1819/dns-test-f56ad876-ea4d-4f06-94a2-52b9671538fc: the server could not find the requested resource (get pods dns-test-f56ad876-ea4d-4f06-94a2-52b9671538fc) May 10 21:33:38.513: INFO: Unable to read jessie_udp@dns-test-service from pod dns-1819/dns-test-f56ad876-ea4d-4f06-94a2-52b9671538fc: the server could not find the requested resource (get pods dns-test-f56ad876-ea4d-4f06-94a2-52b9671538fc) May 10 21:33:38.515: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-1819/dns-test-f56ad876-ea4d-4f06-94a2-52b9671538fc: the server could not find the requested resource (get pods dns-test-f56ad876-ea4d-4f06-94a2-52b9671538fc) May 10 21:33:38.518: INFO: Unable to read jessie_udp@dns-test-service.dns-1819 from pod dns-1819/dns-test-f56ad876-ea4d-4f06-94a2-52b9671538fc: the server could not find the requested resource (get pods dns-test-f56ad876-ea4d-4f06-94a2-52b9671538fc) May 10 21:33:38.520: INFO: Unable to read jessie_tcp@dns-test-service.dns-1819 from pod dns-1819/dns-test-f56ad876-ea4d-4f06-94a2-52b9671538fc: the server could not find the requested resource (get pods dns-test-f56ad876-ea4d-4f06-94a2-52b9671538fc) May 10 21:33:38.522: INFO: Unable to read jessie_udp@dns-test-service.dns-1819.svc from pod dns-1819/dns-test-f56ad876-ea4d-4f06-94a2-52b9671538fc: the server could not find the requested resource (get pods dns-test-f56ad876-ea4d-4f06-94a2-52b9671538fc) May 10 21:33:38.525: INFO: Unable to read jessie_tcp@dns-test-service.dns-1819.svc from pod dns-1819/dns-test-f56ad876-ea4d-4f06-94a2-52b9671538fc: the server could not find the requested resource (get pods dns-test-f56ad876-ea4d-4f06-94a2-52b9671538fc) May 10 21:33:38.527: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-1819.svc from pod dns-1819/dns-test-f56ad876-ea4d-4f06-94a2-52b9671538fc: the server could not find the requested resource (get pods dns-test-f56ad876-ea4d-4f06-94a2-52b9671538fc) May 10 21:33:38.529: INFO: Unable to read 
jessie_tcp@_http._tcp.dns-test-service.dns-1819.svc from pod dns-1819/dns-test-f56ad876-ea4d-4f06-94a2-52b9671538fc: the server could not find the requested resource (get pods dns-test-f56ad876-ea4d-4f06-94a2-52b9671538fc) May 10 21:33:38.588: INFO: Lookups using dns-1819/dns-test-f56ad876-ea4d-4f06-94a2-52b9671538fc failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-1819 wheezy_tcp@dns-test-service.dns-1819 wheezy_udp@dns-test-service.dns-1819.svc wheezy_tcp@dns-test-service.dns-1819.svc wheezy_udp@_http._tcp.dns-test-service.dns-1819.svc wheezy_tcp@_http._tcp.dns-test-service.dns-1819.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-1819 jessie_tcp@dns-test-service.dns-1819 jessie_udp@dns-test-service.dns-1819.svc jessie_tcp@dns-test-service.dns-1819.svc jessie_udp@_http._tcp.dns-test-service.dns-1819.svc jessie_tcp@_http._tcp.dns-test-service.dns-1819.svc] May 10 21:33:43.521: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-1819/dns-test-f56ad876-ea4d-4f06-94a2-52b9671538fc: the server could not find the requested resource (get pods dns-test-f56ad876-ea4d-4f06-94a2-52b9671538fc) May 10 21:33:43.526: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-1819/dns-test-f56ad876-ea4d-4f06-94a2-52b9671538fc: the server could not find the requested resource (get pods dns-test-f56ad876-ea4d-4f06-94a2-52b9671538fc) May 10 21:33:43.530: INFO: Unable to read wheezy_udp@dns-test-service.dns-1819 from pod dns-1819/dns-test-f56ad876-ea4d-4f06-94a2-52b9671538fc: the server could not find the requested resource (get pods dns-test-f56ad876-ea4d-4f06-94a2-52b9671538fc) May 10 21:33:43.533: INFO: Unable to read wheezy_tcp@dns-test-service.dns-1819 from pod dns-1819/dns-test-f56ad876-ea4d-4f06-94a2-52b9671538fc: the server could not find the requested resource (get pods dns-test-f56ad876-ea4d-4f06-94a2-52b9671538fc) May 10 21:33:43.537: INFO: Unable to read 
wheezy_udp@dns-test-service.dns-1819.svc from pod dns-1819/dns-test-f56ad876-ea4d-4f06-94a2-52b9671538fc: the server could not find the requested resource (get pods dns-test-f56ad876-ea4d-4f06-94a2-52b9671538fc) May 10 21:33:43.539: INFO: Unable to read wheezy_tcp@dns-test-service.dns-1819.svc from pod dns-1819/dns-test-f56ad876-ea4d-4f06-94a2-52b9671538fc: the server could not find the requested resource (get pods dns-test-f56ad876-ea4d-4f06-94a2-52b9671538fc) May 10 21:33:43.542: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-1819.svc from pod dns-1819/dns-test-f56ad876-ea4d-4f06-94a2-52b9671538fc: the server could not find the requested resource (get pods dns-test-f56ad876-ea4d-4f06-94a2-52b9671538fc) May 10 21:33:43.544: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-1819.svc from pod dns-1819/dns-test-f56ad876-ea4d-4f06-94a2-52b9671538fc: the server could not find the requested resource (get pods dns-test-f56ad876-ea4d-4f06-94a2-52b9671538fc) May 10 21:33:43.590: INFO: Unable to read jessie_udp@dns-test-service from pod dns-1819/dns-test-f56ad876-ea4d-4f06-94a2-52b9671538fc: the server could not find the requested resource (get pods dns-test-f56ad876-ea4d-4f06-94a2-52b9671538fc) May 10 21:33:43.592: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-1819/dns-test-f56ad876-ea4d-4f06-94a2-52b9671538fc: the server could not find the requested resource (get pods dns-test-f56ad876-ea4d-4f06-94a2-52b9671538fc) May 10 21:33:43.594: INFO: Unable to read jessie_udp@dns-test-service.dns-1819 from pod dns-1819/dns-test-f56ad876-ea4d-4f06-94a2-52b9671538fc: the server could not find the requested resource (get pods dns-test-f56ad876-ea4d-4f06-94a2-52b9671538fc) May 10 21:33:43.596: INFO: Unable to read jessie_tcp@dns-test-service.dns-1819 from pod dns-1819/dns-test-f56ad876-ea4d-4f06-94a2-52b9671538fc: the server could not find the requested resource (get pods dns-test-f56ad876-ea4d-4f06-94a2-52b9671538fc) May 10 21:33:43.598: 
INFO: Unable to read jessie_udp@dns-test-service.dns-1819.svc from pod dns-1819/dns-test-f56ad876-ea4d-4f06-94a2-52b9671538fc: the server could not find the requested resource (get pods dns-test-f56ad876-ea4d-4f06-94a2-52b9671538fc) May 10 21:33:43.600: INFO: Unable to read jessie_tcp@dns-test-service.dns-1819.svc from pod dns-1819/dns-test-f56ad876-ea4d-4f06-94a2-52b9671538fc: the server could not find the requested resource (get pods dns-test-f56ad876-ea4d-4f06-94a2-52b9671538fc) May 10 21:33:43.603: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-1819.svc from pod dns-1819/dns-test-f56ad876-ea4d-4f06-94a2-52b9671538fc: the server could not find the requested resource (get pods dns-test-f56ad876-ea4d-4f06-94a2-52b9671538fc) May 10 21:33:43.605: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-1819.svc from pod dns-1819/dns-test-f56ad876-ea4d-4f06-94a2-52b9671538fc: the server could not find the requested resource (get pods dns-test-f56ad876-ea4d-4f06-94a2-52b9671538fc) May 10 21:33:43.620: INFO: Lookups using dns-1819/dns-test-f56ad876-ea4d-4f06-94a2-52b9671538fc failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-1819 wheezy_tcp@dns-test-service.dns-1819 wheezy_udp@dns-test-service.dns-1819.svc wheezy_tcp@dns-test-service.dns-1819.svc wheezy_udp@_http._tcp.dns-test-service.dns-1819.svc wheezy_tcp@_http._tcp.dns-test-service.dns-1819.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-1819 jessie_tcp@dns-test-service.dns-1819 jessie_udp@dns-test-service.dns-1819.svc jessie_tcp@dns-test-service.dns-1819.svc jessie_udp@_http._tcp.dns-test-service.dns-1819.svc jessie_tcp@_http._tcp.dns-test-service.dns-1819.svc] May 10 21:33:48.525: INFO: DNS probes using dns-1819/dns-test-f56ad876-ea4d-4f06-94a2-52b9671538fc succeeded STEP: deleting the pod STEP: deleting the test service STEP: deleting the test headless service [AfterEach] [sig-network] DNS 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 10 21:33:49.077: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-1819" for this suite. • [SLOW TEST:39.039 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] DNS should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]","total":278,"completed":91,"skipped":1496,"failed":0} SSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 10 21:33:49.106: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting up data [It] should support subpaths with projected pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating pod pod-subpath-test-projected-bzv2 STEP: Creating a pod to test atomic-volume-subpath May 10 21:33:49.181: INFO: Waiting up to 5m0s for pod "pod-subpath-test-projected-bzv2" in namespace "subpath-6511" to be "success or failure" May 10 21:33:49.186: INFO: Pod 
"pod-subpath-test-projected-bzv2": Phase="Pending", Reason="", readiness=false. Elapsed: 4.375924ms May 10 21:33:51.190: INFO: Pod "pod-subpath-test-projected-bzv2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008841929s May 10 21:33:53.195: INFO: Pod "pod-subpath-test-projected-bzv2": Phase="Running", Reason="", readiness=true. Elapsed: 4.01334418s May 10 21:33:55.199: INFO: Pod "pod-subpath-test-projected-bzv2": Phase="Running", Reason="", readiness=true. Elapsed: 6.017691847s May 10 21:33:57.204: INFO: Pod "pod-subpath-test-projected-bzv2": Phase="Running", Reason="", readiness=true. Elapsed: 8.022356643s May 10 21:33:59.208: INFO: Pod "pod-subpath-test-projected-bzv2": Phase="Running", Reason="", readiness=true. Elapsed: 10.026762919s May 10 21:34:01.213: INFO: Pod "pod-subpath-test-projected-bzv2": Phase="Running", Reason="", readiness=true. Elapsed: 12.031968545s May 10 21:34:03.217: INFO: Pod "pod-subpath-test-projected-bzv2": Phase="Running", Reason="", readiness=true. Elapsed: 14.03567878s May 10 21:34:05.223: INFO: Pod "pod-subpath-test-projected-bzv2": Phase="Running", Reason="", readiness=true. Elapsed: 16.041557038s May 10 21:34:07.226: INFO: Pod "pod-subpath-test-projected-bzv2": Phase="Running", Reason="", readiness=true. Elapsed: 18.045039035s May 10 21:34:09.230: INFO: Pod "pod-subpath-test-projected-bzv2": Phase="Running", Reason="", readiness=true. Elapsed: 20.048285927s May 10 21:34:11.234: INFO: Pod "pod-subpath-test-projected-bzv2": Phase="Running", Reason="", readiness=true. Elapsed: 22.052306224s May 10 21:34:13.238: INFO: Pod "pod-subpath-test-projected-bzv2": Phase="Running", Reason="", readiness=true. Elapsed: 24.056403995s May 10 21:34:15.242: INFO: Pod "pod-subpath-test-projected-bzv2": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 26.061047443s STEP: Saw pod success May 10 21:34:15.242: INFO: Pod "pod-subpath-test-projected-bzv2" satisfied condition "success or failure" May 10 21:34:15.249: INFO: Trying to get logs from node jerma-worker pod pod-subpath-test-projected-bzv2 container test-container-subpath-projected-bzv2: STEP: delete the pod May 10 21:34:15.266: INFO: Waiting for pod pod-subpath-test-projected-bzv2 to disappear May 10 21:34:15.271: INFO: Pod pod-subpath-test-projected-bzv2 no longer exists STEP: Deleting pod pod-subpath-test-projected-bzv2 May 10 21:34:15.271: INFO: Deleting pod "pod-subpath-test-projected-bzv2" in namespace "subpath-6511" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 10 21:34:15.274: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-6511" for this suite. • [SLOW TEST:26.175 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with projected pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [LinuxOnly] [Conformance]","total":278,"completed":92,"skipped":1500,"failed":0} SSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a 
kubernetes client May 10 21:34:15.281: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name projected-configmap-test-volume-map-da87f245-cf2f-444c-82de-6fbd0a8e466b STEP: Creating a pod to test consume configMaps May 10 21:34:15.376: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-e7ee9cd3-4427-4772-936b-3e1e3d62ef86" in namespace "projected-4465" to be "success or failure" May 10 21:34:15.379: INFO: Pod "pod-projected-configmaps-e7ee9cd3-4427-4772-936b-3e1e3d62ef86": Phase="Pending", Reason="", readiness=false. Elapsed: 3.092722ms May 10 21:34:17.383: INFO: Pod "pod-projected-configmaps-e7ee9cd3-4427-4772-936b-3e1e3d62ef86": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006529134s May 10 21:34:19.485: INFO: Pod "pod-projected-configmaps-e7ee9cd3-4427-4772-936b-3e1e3d62ef86": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.108793625s STEP: Saw pod success May 10 21:34:19.485: INFO: Pod "pod-projected-configmaps-e7ee9cd3-4427-4772-936b-3e1e3d62ef86" satisfied condition "success or failure" May 10 21:34:19.488: INFO: Trying to get logs from node jerma-worker2 pod pod-projected-configmaps-e7ee9cd3-4427-4772-936b-3e1e3d62ef86 container projected-configmap-volume-test: STEP: delete the pod May 10 21:34:19.549: INFO: Waiting for pod pod-projected-configmaps-e7ee9cd3-4427-4772-936b-3e1e3d62ef86 to disappear May 10 21:34:19.676: INFO: Pod pod-projected-configmaps-e7ee9cd3-4427-4772-936b-3e1e3d62ef86 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 10 21:34:19.677: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-4465" for this suite. •{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]","total":278,"completed":93,"skipped":1504,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 10 21:34:19.686: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:133 [It] should update pod when spec was updated and update strategy is RollingUpdate [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 10 21:34:19.745: INFO: Creating simple daemon set daemon-set STEP: Check that daemon pods launch on every node of the cluster. May 10 21:34:19.752: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 10 21:34:19.756: INFO: Number of nodes with available pods: 0 May 10 21:34:19.756: INFO: Node jerma-worker is running more than one daemon pod May 10 21:34:20.772: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 10 21:34:20.775: INFO: Number of nodes with available pods: 0 May 10 21:34:20.775: INFO: Node jerma-worker is running more than one daemon pod May 10 21:34:21.761: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 10 21:34:21.764: INFO: Number of nodes with available pods: 0 May 10 21:34:21.764: INFO: Node jerma-worker is running more than one daemon pod May 10 21:34:22.761: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 10 21:34:22.765: INFO: Number of nodes with available pods: 0 May 10 21:34:22.765: INFO: Node jerma-worker is running more than one daemon pod May 10 21:34:23.761: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 10 21:34:23.765: INFO: Number of nodes with available pods: 1 May 10 21:34:23.765: INFO: Node jerma-worker2 is running more than one daemon pod May 10 21:34:24.760: INFO: DaemonSet pods can't tolerate node 
jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 10 21:34:24.763: INFO: Number of nodes with available pods: 2 May 10 21:34:24.763: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Update daemon pods image. STEP: Check that daemon pods images are updated. May 10 21:34:24.830: INFO: Wrong image for pod: daemon-set-mm5bx. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. May 10 21:34:24.830: INFO: Wrong image for pod: daemon-set-zt2m7. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. May 10 21:34:24.875: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 10 21:34:25.880: INFO: Wrong image for pod: daemon-set-mm5bx. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. May 10 21:34:25.880: INFO: Wrong image for pod: daemon-set-zt2m7. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. May 10 21:34:25.884: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 10 21:34:26.879: INFO: Wrong image for pod: daemon-set-mm5bx. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. May 10 21:34:26.879: INFO: Wrong image for pod: daemon-set-zt2m7. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. 
May 10 21:34:26.882: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 10 21:34:27.880: INFO: Wrong image for pod: daemon-set-mm5bx. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. May 10 21:34:27.880: INFO: Wrong image for pod: daemon-set-zt2m7. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. May 10 21:34:27.880: INFO: Pod daemon-set-zt2m7 is not available May 10 21:34:27.885: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 10 21:34:28.881: INFO: Wrong image for pod: daemon-set-mm5bx. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. May 10 21:34:28.881: INFO: Wrong image for pod: daemon-set-zt2m7. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. May 10 21:34:28.881: INFO: Pod daemon-set-zt2m7 is not available May 10 21:34:28.885: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 10 21:34:29.880: INFO: Wrong image for pod: daemon-set-mm5bx. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. May 10 21:34:29.880: INFO: Wrong image for pod: daemon-set-zt2m7. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. 
May 10 21:34:29.880: INFO: Pod daemon-set-zt2m7 is not available May 10 21:34:29.884: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 10 21:34:30.887: INFO: Wrong image for pod: daemon-set-mm5bx. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. May 10 21:34:30.887: INFO: Wrong image for pod: daemon-set-zt2m7. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. May 10 21:34:30.887: INFO: Pod daemon-set-zt2m7 is not available May 10 21:34:30.890: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 10 21:34:31.880: INFO: Wrong image for pod: daemon-set-mm5bx. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. May 10 21:34:31.880: INFO: Wrong image for pod: daemon-set-zt2m7. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. May 10 21:34:31.880: INFO: Pod daemon-set-zt2m7 is not available May 10 21:34:31.884: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 10 21:34:32.880: INFO: Wrong image for pod: daemon-set-mm5bx. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. May 10 21:34:32.880: INFO: Wrong image for pod: daemon-set-zt2m7. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. 
May 10 21:34:32.880: INFO: Pod daemon-set-zt2m7 is not available May 10 21:34:32.884: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 10 21:34:33.880: INFO: Wrong image for pod: daemon-set-mm5bx. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. May 10 21:34:33.880: INFO: Wrong image for pod: daemon-set-zt2m7. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. May 10 21:34:33.880: INFO: Pod daemon-set-zt2m7 is not available May 10 21:34:33.884: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 10 21:34:34.879: INFO: Wrong image for pod: daemon-set-mm5bx. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. May 10 21:34:34.879: INFO: Wrong image for pod: daemon-set-zt2m7. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. May 10 21:34:34.879: INFO: Pod daemon-set-zt2m7 is not available May 10 21:34:34.882: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 10 21:34:35.880: INFO: Wrong image for pod: daemon-set-mm5bx. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. May 10 21:34:35.880: INFO: Wrong image for pod: daemon-set-zt2m7. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. 
May 10 21:34:35.880: INFO: Pod daemon-set-zt2m7 is not available May 10 21:34:35.884: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 10 21:34:36.880: INFO: Wrong image for pod: daemon-set-mm5bx. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. May 10 21:34:36.880: INFO: Wrong image for pod: daemon-set-zt2m7. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. May 10 21:34:36.880: INFO: Pod daemon-set-zt2m7 is not available May 10 21:34:36.887: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 10 21:34:37.880: INFO: Wrong image for pod: daemon-set-mm5bx. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. May 10 21:34:37.880: INFO: Wrong image for pod: daemon-set-zt2m7. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. May 10 21:34:37.880: INFO: Pod daemon-set-zt2m7 is not available May 10 21:34:37.884: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 10 21:34:38.880: INFO: Wrong image for pod: daemon-set-mm5bx. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. May 10 21:34:38.880: INFO: Wrong image for pod: daemon-set-zt2m7. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. 
May 10 21:34:38.880: INFO: Pod daemon-set-zt2m7 is not available May 10 21:34:38.885: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 10 21:34:39.880: INFO: Wrong image for pod: daemon-set-mm5bx. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. May 10 21:34:39.880: INFO: Pod daemon-set-zf4p4 is not available May 10 21:34:39.884: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 10 21:34:40.880: INFO: Wrong image for pod: daemon-set-mm5bx. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. May 10 21:34:40.880: INFO: Pod daemon-set-zf4p4 is not available May 10 21:34:40.888: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 10 21:34:42.127: INFO: Wrong image for pod: daemon-set-mm5bx. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. May 10 21:34:42.127: INFO: Pod daemon-set-zf4p4 is not available May 10 21:34:42.132: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 10 21:34:42.880: INFO: Wrong image for pod: daemon-set-mm5bx. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. 
May 10 21:34:42.880: INFO: Pod daemon-set-zf4p4 is not available May 10 21:34:42.884: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 10 21:34:43.880: INFO: Wrong image for pod: daemon-set-mm5bx. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. May 10 21:34:43.884: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 10 21:34:44.879: INFO: Wrong image for pod: daemon-set-mm5bx. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. May 10 21:34:44.882: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 10 21:34:45.880: INFO: Wrong image for pod: daemon-set-mm5bx. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. May 10 21:34:45.880: INFO: Pod daemon-set-mm5bx is not available May 10 21:34:45.884: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 10 21:34:46.879: INFO: Pod daemon-set-n98t7 is not available May 10 21:34:46.883: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node STEP: Check that daemon pods are still running on every node of the cluster. 
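The repeated "Wrong image for pod" lines above come from a poll that compares every daemon pod's container image against the updated DaemonSet image until the RollingUpdate has replaced them all. A minimal sketch of that comparison (the function name `pods_with_wrong_image` and the dict-based input are illustrative, not the framework's actual API):

```python
def pods_with_wrong_image(pod_images: dict, expected: str) -> list:
    """Return names of pods whose image does not yet match the updated
    DaemonSet image, mirroring the 'Wrong image for pod' log lines."""
    return sorted(name for name, image in pod_images.items() if image != expected)

# Mirroring the log: one pod still on the old httpd image, one updated.
print(pods_with_wrong_image(
    {"daemon-set-mm5bx": "docker.io/library/httpd:2.4.38-alpine",
     "daemon-set-n98t7": "gcr.io/kubernetes-e2e-test-images/agnhost:2.8"},
    "gcr.io/kubernetes-e2e-test-images/agnhost:2.8"))
# -> ['daemon-set-mm5bx']
```

The rollout is complete once this list is empty, which is when the log above stops reporting wrong images and moves on to the availability check.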
May 10 21:34:46.886: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 10 21:34:46.889: INFO: Number of nodes with available pods: 1 May 10 21:34:46.889: INFO: Node jerma-worker2 is running more than one daemon pod May 10 21:34:48.055: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 10 21:34:48.059: INFO: Number of nodes with available pods: 1 May 10 21:34:48.059: INFO: Node jerma-worker2 is running more than one daemon pod May 10 21:34:48.902: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 10 21:34:48.906: INFO: Number of nodes with available pods: 1 May 10 21:34:48.906: INFO: Node jerma-worker2 is running more than one daemon pod May 10 21:34:49.894: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 10 21:34:49.897: INFO: Number of nodes with available pods: 2 May 10 21:34:49.898: INFO: Number of running nodes: 2, number of available pods: 2 [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:99 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-117, will wait for the garbage collector to delete the pods May 10 21:34:49.990: INFO: Deleting DaemonSet.extensions daemon-set took: 5.636807ms May 10 21:34:50.290: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.208996ms May 10 21:34:59.500: INFO: Number of nodes with available pods: 0 May 10 21:34:59.500: INFO: Number of running nodes: 0, number of available 
pods: 0
May 10 21:34:59.502: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-117/daemonsets","resourceVersion":"15065681"},"items":null}
May 10 21:34:59.505: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-117/pods","resourceVersion":"15065681"},"items":null}
[AfterEach] [sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
May 10 21:34:59.534: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-117" for this suite.
• [SLOW TEST:39.856 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]","total":278,"completed":94,"skipped":1525,"failed":0}
SSS
------------------------------
[sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
May 10 21:34:59.543: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name configmap-test-volume-map-5e3abca9-0a35-4eb1-86e8-1e70c8ab6512 STEP: Creating a pod to test consume configMaps May 10 21:34:59.604: INFO: Waiting up to 5m0s for pod "pod-configmaps-e677ccc4-cc9a-4f44-a9f9-c7ec0da6f8c5" in namespace "configmap-2623" to be "success or failure" May 10 21:34:59.626: INFO: Pod "pod-configmaps-e677ccc4-cc9a-4f44-a9f9-c7ec0da6f8c5": Phase="Pending", Reason="", readiness=false. Elapsed: 21.536937ms May 10 21:35:01.630: INFO: Pod "pod-configmaps-e677ccc4-cc9a-4f44-a9f9-c7ec0da6f8c5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.025879714s May 10 21:35:03.659: INFO: Pod "pod-configmaps-e677ccc4-cc9a-4f44-a9f9-c7ec0da6f8c5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.05507231s STEP: Saw pod success May 10 21:35:03.659: INFO: Pod "pod-configmaps-e677ccc4-cc9a-4f44-a9f9-c7ec0da6f8c5" satisfied condition "success or failure" May 10 21:35:03.662: INFO: Trying to get logs from node jerma-worker2 pod pod-configmaps-e677ccc4-cc9a-4f44-a9f9-c7ec0da6f8c5 container configmap-volume-test: STEP: delete the pod May 10 21:35:03.718: INFO: Waiting for pod pod-configmaps-e677ccc4-cc9a-4f44-a9f9-c7ec0da6f8c5 to disappear May 10 21:35:03.738: INFO: Pod pod-configmaps-e677ccc4-cc9a-4f44-a9f9-c7ec0da6f8c5 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 10 21:35:03.738: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-2623" for this suite. 
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":278,"completed":95,"skipped":1528,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
May 10 21:35:03.744: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-8062.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-1.dns-test-service.dns-8062.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/wheezy_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-8062.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done
STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-8062.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-1.dns-test-service.dns-8062.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/jessie_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-8062.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done
STEP: creating a pod to probe /etc/hosts
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
May 10 21:35:09.987: INFO: DNS probes using dns-8062/dns-test-47cfa7a2-b7a8-468d-9079-0e6745bf23d7 succeeded
STEP: deleting the pod
[AfterEach] [sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
May 10 21:35:09.999: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-8062" for this suite.
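The probe scripts above derive a pod's DNS A record from its IP with `hostname -i | awk -F. '{print $1"-"$2"-"$3"-"$4".dns-8062.pod.cluster.local"}'`: the dots in the IPv4 address become dashes, followed by the namespace and the pod DNS suffix. A minimal Python sketch of that same transformation (function name and the example IP are illustrative; `cluster.local` is the default cluster domain assumed here):

```python
def pod_a_record(pod_ip: str, namespace: str, cluster_domain: str = "cluster.local") -> str:
    """Build the pod A-record name the probe script constructs with awk:
    dots in the IPv4 address are replaced by dashes."""
    return "%s.%s.pod.%s" % (pod_ip.replace(".", "-"), namespace, cluster_domain)

# Example: a hypothetical pod IP in the test namespace dns-8062
print(pod_a_record("10.244.1.5", "dns-8062"))
# -> 10-244-1-5.dns-8062.pod.cluster.local
```

The test then resolves that name over both UDP (`dig +notcp`) and TCP (`dig +tcp`) and writes an `OK` marker file for each protocol that succeeds.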
• [SLOW TEST:6.290 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]","total":278,"completed":96,"skipped":1555,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should serve a basic endpoint from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 10 21:35:10.034: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139 [It] should serve a basic endpoint from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating service endpoint-test2 in namespace services-7579 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-7579 to expose endpoints map[] May 10 21:35:10.403: INFO: Get endpoints failed (242.393107ms elapsed, ignoring for 5s): endpoints "endpoint-test2" not found May 10 21:35:11.407: INFO: successfully validated that service endpoint-test2 in namespace services-7579 exposes endpoints map[] (1.246469892s elapsed) STEP: Creating pod pod1 in namespace services-7579 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-7579 to expose 
endpoints map[pod1:[80]] May 10 21:35:14.513: INFO: successfully validated that service endpoint-test2 in namespace services-7579 exposes endpoints map[pod1:[80]] (3.098762674s elapsed) STEP: Creating pod pod2 in namespace services-7579 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-7579 to expose endpoints map[pod1:[80] pod2:[80]] May 10 21:35:17.646: INFO: successfully validated that service endpoint-test2 in namespace services-7579 exposes endpoints map[pod1:[80] pod2:[80]] (3.129488353s elapsed) STEP: Deleting pod pod1 in namespace services-7579 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-7579 to expose endpoints map[pod2:[80]] May 10 21:35:17.761: INFO: successfully validated that service endpoint-test2 in namespace services-7579 exposes endpoints map[pod2:[80]] (109.829893ms elapsed) STEP: Deleting pod pod2 in namespace services-7579 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-7579 to expose endpoints map[] May 10 21:35:19.043: INFO: successfully validated that service endpoint-test2 in namespace services-7579 exposes endpoints map[] (1.178070759s elapsed) [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 10 21:35:19.070: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-7579" for this suite. 
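The endpoint test above repeatedly fetches the service's Endpoints object and compares it against an expected map such as `map[pod1:[80] pod2:[80]]`, re-checking until the two agree or the 3m0s budget runs out. A sketch of just the comparison step (the helper name `endpoints_match` and the dict representation are illustrative, not the e2e framework's types):

```python
def endpoints_match(observed: dict, expected: dict) -> bool:
    """Compare endpoint maps of the form {pod_name: [ports...]},
    ignoring port order, as in 'expose endpoints map[pod1:[80] pod2:[80]]'."""
    if set(observed) != set(expected):
        return False
    return all(sorted(observed[name]) == sorted(expected[name]) for name in expected)

print(endpoints_match({"pod1": [80], "pod2": [80]},
                      {"pod2": [80], "pod1": [80]}))   # True: same pods, same ports
print(endpoints_match({"pod2": [80]},
                      {"pod1": [80], "pod2": [80]}))   # False: pod1 not yet registered
```

Each successful comparison corresponds to one of the "successfully validated that service endpoint-test2 ... exposes endpoints" lines in the log, including the final empty map after both pods are deleted.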
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143 • [SLOW TEST:9.090 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should serve a basic endpoint from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Services should serve a basic endpoint from pods [Conformance]","total":278,"completed":97,"skipped":1592,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 10 21:35:19.125: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name configmap-test-volume-map-893e1da7-8aeb-45b1-8d87-0a959aaa7719 STEP: Creating a pod to test consume configMaps May 10 21:35:19.192: INFO: Waiting up to 5m0s for pod "pod-configmaps-691dbb34-2ff3-497e-8728-c7129ac8b0eb" in namespace "configmap-4471" to be "success or failure" May 10 21:35:19.195: INFO: Pod "pod-configmaps-691dbb34-2ff3-497e-8728-c7129ac8b0eb": Phase="Pending", Reason="", readiness=false. 
Elapsed: 3.311755ms May 10 21:35:21.199: INFO: Pod "pod-configmaps-691dbb34-2ff3-497e-8728-c7129ac8b0eb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006835068s May 10 21:35:23.202: INFO: Pod "pod-configmaps-691dbb34-2ff3-497e-8728-c7129ac8b0eb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010093928s STEP: Saw pod success May 10 21:35:23.202: INFO: Pod "pod-configmaps-691dbb34-2ff3-497e-8728-c7129ac8b0eb" satisfied condition "success or failure" May 10 21:35:23.204: INFO: Trying to get logs from node jerma-worker pod pod-configmaps-691dbb34-2ff3-497e-8728-c7129ac8b0eb container configmap-volume-test: STEP: delete the pod May 10 21:35:23.227: INFO: Waiting for pod pod-configmaps-691dbb34-2ff3-497e-8728-c7129ac8b0eb to disappear May 10 21:35:23.231: INFO: Pod pod-configmaps-691dbb34-2ff3-497e-8728-c7129ac8b0eb no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 10 21:35:23.231: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-4471" for this suite. 
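The repeated `Waiting up to 5m0s for pod … to be "success or failure"` entries above come from a simple poll-until-deadline loop in the e2e framework. A minimal Python sketch of the same pattern (function and parameter names are mine, not the framework's):

```python
import time

def wait_for(check, timeout=300.0, interval=2.0,
             clock=time.monotonic, sleep=time.sleep):
    """Poll `check` every `interval` seconds until it returns True
    or `timeout` seconds have elapsed; return the final outcome."""
    deadline = clock() + timeout
    while True:
        if check():
            return True
        if clock() >= deadline:
            return False
        sleep(interval)

# Example: a pod that reports "Pending" twice, then "Succeeded",
# mirroring the three Elapsed entries in the log above.
phases = iter(["Pending", "Pending", "Succeeded"])
result = wait_for(lambda: next(phases) == "Succeeded",
                  timeout=300.0, interval=0, sleep=lambda s: None)
```

The injected `clock` and `sleep` parameters exist only to make the loop testable without real waiting; the framework's own loop simply sleeps between API polls.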
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":98,"skipped":1620,"failed":0} ------------------------------ [sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 10 21:35:23.308: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0777 on tmpfs May 10 21:35:23.446: INFO: Waiting up to 5m0s for pod "pod-8332e537-0f67-47f0-9565-86e36f2127db" in namespace "emptydir-6609" to be "success or failure" May 10 21:35:23.459: INFO: Pod "pod-8332e537-0f67-47f0-9565-86e36f2127db": Phase="Pending", Reason="", readiness=false. Elapsed: 13.351157ms May 10 21:35:25.464: INFO: Pod "pod-8332e537-0f67-47f0-9565-86e36f2127db": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017832855s May 10 21:35:27.467: INFO: Pod "pod-8332e537-0f67-47f0-9565-86e36f2127db": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.0209361s STEP: Saw pod success May 10 21:35:27.467: INFO: Pod "pod-8332e537-0f67-47f0-9565-86e36f2127db" satisfied condition "success or failure" May 10 21:35:27.468: INFO: Trying to get logs from node jerma-worker2 pod pod-8332e537-0f67-47f0-9565-86e36f2127db container test-container: STEP: delete the pod May 10 21:35:27.504: INFO: Waiting for pod pod-8332e537-0f67-47f0-9565-86e36f2127db to disappear May 10 21:35:27.525: INFO: Pod pod-8332e537-0f67-47f0-9565-86e36f2127db no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 10 21:35:27.525: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-6609" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":99,"skipped":1620,"failed":0} SSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 10 21:35:27.534: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir volume type on node default medium May 10 21:35:27.884: INFO: Waiting up to 5m0s for pod "pod-b4e68372-83c7-4b25-9dc1-ad561b3d015b" in namespace 
"emptydir-244" to be "success or failure" May 10 21:35:27.933: INFO: Pod "pod-b4e68372-83c7-4b25-9dc1-ad561b3d015b": Phase="Pending", Reason="", readiness=false. Elapsed: 49.857888ms May 10 21:35:29.970: INFO: Pod "pod-b4e68372-83c7-4b25-9dc1-ad561b3d015b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.086780158s May 10 21:35:31.974: INFO: Pod "pod-b4e68372-83c7-4b25-9dc1-ad561b3d015b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.090214192s STEP: Saw pod success May 10 21:35:31.974: INFO: Pod "pod-b4e68372-83c7-4b25-9dc1-ad561b3d015b" satisfied condition "success or failure" May 10 21:35:31.976: INFO: Trying to get logs from node jerma-worker pod pod-b4e68372-83c7-4b25-9dc1-ad561b3d015b container test-container: STEP: delete the pod May 10 21:35:32.047: INFO: Waiting for pod pod-b4e68372-83c7-4b25-9dc1-ad561b3d015b to disappear May 10 21:35:32.072: INFO: Pod pod-b4e68372-83c7-4b25-9dc1-ad561b3d015b no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 10 21:35:32.072: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-244" for this suite. 
•{"msg":"PASSED [sig-storage] EmptyDir volumes volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":100,"skipped":1629,"failed":0} SSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 10 21:35:32.080: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name projected-configmap-test-volume-136e7d1c-7900-4eed-af6d-9236bc55adac STEP: Creating a pod to test consume configMaps May 10 21:35:32.148: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-427395e9-708c-4fe7-8592-6078ca583f30" in namespace "projected-4650" to be "success or failure" May 10 21:35:32.152: INFO: Pod "pod-projected-configmaps-427395e9-708c-4fe7-8592-6078ca583f30": Phase="Pending", Reason="", readiness=false. Elapsed: 3.409899ms May 10 21:35:34.156: INFO: Pod "pod-projected-configmaps-427395e9-708c-4fe7-8592-6078ca583f30": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007360519s May 10 21:35:36.160: INFO: Pod "pod-projected-configmaps-427395e9-708c-4fe7-8592-6078ca583f30": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.01134299s STEP: Saw pod success May 10 21:35:36.160: INFO: Pod "pod-projected-configmaps-427395e9-708c-4fe7-8592-6078ca583f30" satisfied condition "success or failure" May 10 21:35:36.163: INFO: Trying to get logs from node jerma-worker pod pod-projected-configmaps-427395e9-708c-4fe7-8592-6078ca583f30 container projected-configmap-volume-test: STEP: delete the pod May 10 21:35:36.216: INFO: Waiting for pod pod-projected-configmaps-427395e9-708c-4fe7-8592-6078ca583f30 to disappear May 10 21:35:36.252: INFO: Pod pod-projected-configmaps-427395e9-708c-4fe7-8592-6078ca583f30 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 10 21:35:36.252: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-4650" for this suite. •{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]","total":278,"completed":101,"skipped":1633,"failed":0} SSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 10 21:35:36.260: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Performing setup for networking test in namespace pod-network-test-421 
STEP: creating a selector STEP: Creating the service pods in kubernetes May 10 21:35:36.330: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods May 10 21:36:00.502: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.164:8080/dial?request=hostname&protocol=udp&host=10.244.1.247&port=8081&tries=1'] Namespace:pod-network-test-421 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 10 21:36:00.502: INFO: >>> kubeConfig: /root/.kube/config I0510 21:36:00.536564 6 log.go:172] (0xc002c886e0) (0xc00147f7c0) Create stream I0510 21:36:00.536614 6 log.go:172] (0xc002c886e0) (0xc00147f7c0) Stream added, broadcasting: 1 I0510 21:36:00.539675 6 log.go:172] (0xc002c886e0) Reply frame received for 1 I0510 21:36:00.539716 6 log.go:172] (0xc002c886e0) (0xc00147f860) Create stream I0510 21:36:00.539732 6 log.go:172] (0xc002c886e0) (0xc00147f860) Stream added, broadcasting: 3 I0510 21:36:00.540655 6 log.go:172] (0xc002c886e0) Reply frame received for 3 I0510 21:36:00.540687 6 log.go:172] (0xc002c886e0) (0xc00147fae0) Create stream I0510 21:36:00.540696 6 log.go:172] (0xc002c886e0) (0xc00147fae0) Stream added, broadcasting: 5 I0510 21:36:00.541794 6 log.go:172] (0xc002c886e0) Reply frame received for 5 I0510 21:36:00.597372 6 log.go:172] (0xc002c886e0) Data frame received for 3 I0510 21:36:00.597460 6 log.go:172] (0xc00147f860) (3) Data frame handling I0510 21:36:00.597498 6 log.go:172] (0xc00147f860) (3) Data frame sent I0510 21:36:00.598586 6 log.go:172] (0xc002c886e0) Data frame received for 3 I0510 21:36:00.598603 6 log.go:172] (0xc00147f860) (3) Data frame handling I0510 21:36:00.598836 6 log.go:172] (0xc002c886e0) Data frame received for 5 I0510 21:36:00.598860 6 log.go:172] (0xc00147fae0) (5) Data frame handling I0510 21:36:00.600799 6 log.go:172] (0xc002c886e0) Data frame received for 1 I0510 21:36:00.600814 6 log.go:172] (0xc00147f7c0) (1) 
Data frame handling I0510 21:36:00.600839 6 log.go:172] (0xc00147f7c0) (1) Data frame sent I0510 21:36:00.600861 6 log.go:172] (0xc002c886e0) (0xc00147f7c0) Stream removed, broadcasting: 1 I0510 21:36:00.600890 6 log.go:172] (0xc002c886e0) Go away received I0510 21:36:00.601225 6 log.go:172] (0xc002c886e0) (0xc00147f7c0) Stream removed, broadcasting: 1 I0510 21:36:00.601284 6 log.go:172] (0xc002c886e0) (0xc00147f860) Stream removed, broadcasting: 3 I0510 21:36:00.601296 6 log.go:172] (0xc002c886e0) (0xc00147fae0) Stream removed, broadcasting: 5 May 10 21:36:00.601: INFO: Waiting for responses: map[] May 10 21:36:00.618: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.164:8080/dial?request=hostname&protocol=udp&host=10.244.2.163&port=8081&tries=1'] Namespace:pod-network-test-421 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 10 21:36:00.618: INFO: >>> kubeConfig: /root/.kube/config I0510 21:36:00.654988 6 log.go:172] (0xc001994bb0) (0xc0011d1540) Create stream I0510 21:36:00.655024 6 log.go:172] (0xc001994bb0) (0xc0011d1540) Stream added, broadcasting: 1 I0510 21:36:00.657799 6 log.go:172] (0xc001994bb0) Reply frame received for 1 I0510 21:36:00.657870 6 log.go:172] (0xc001994bb0) (0xc0014501e0) Create stream I0510 21:36:00.657919 6 log.go:172] (0xc001994bb0) (0xc0014501e0) Stream added, broadcasting: 3 I0510 21:36:00.658925 6 log.go:172] (0xc001994bb0) Reply frame received for 3 I0510 21:36:00.658962 6 log.go:172] (0xc001994bb0) (0xc00147fd60) Create stream I0510 21:36:00.658974 6 log.go:172] (0xc001994bb0) (0xc00147fd60) Stream added, broadcasting: 5 I0510 21:36:00.659732 6 log.go:172] (0xc001994bb0) Reply frame received for 5 I0510 21:36:00.727557 6 log.go:172] (0xc001994bb0) Data frame received for 3 I0510 21:36:00.727596 6 log.go:172] (0xc0014501e0) (3) Data frame handling I0510 21:36:00.727622 6 log.go:172] (0xc0014501e0) (3) Data frame sent I0510 
21:36:00.727766 6 log.go:172] (0xc001994bb0) Data frame received for 3 I0510 21:36:00.727785 6 log.go:172] (0xc0014501e0) (3) Data frame handling I0510 21:36:00.727926 6 log.go:172] (0xc001994bb0) Data frame received for 5 I0510 21:36:00.727948 6 log.go:172] (0xc00147fd60) (5) Data frame handling I0510 21:36:00.729712 6 log.go:172] (0xc001994bb0) Data frame received for 1 I0510 21:36:00.729729 6 log.go:172] (0xc0011d1540) (1) Data frame handling I0510 21:36:00.729738 6 log.go:172] (0xc0011d1540) (1) Data frame sent I0510 21:36:00.729749 6 log.go:172] (0xc001994bb0) (0xc0011d1540) Stream removed, broadcasting: 1 I0510 21:36:00.729852 6 log.go:172] (0xc001994bb0) (0xc0011d1540) Stream removed, broadcasting: 1 I0510 21:36:00.729915 6 log.go:172] (0xc001994bb0) (0xc0014501e0) Stream removed, broadcasting: 3 I0510 21:36:00.729929 6 log.go:172] (0xc001994bb0) (0xc00147fd60) Stream removed, broadcasting: 5 May 10 21:36:00.729: INFO: Waiting for responses: map[] I0510 21:36:00.729988 6 log.go:172] (0xc001994bb0) Go away received [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 10 21:36:00.730: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-421" for this suite. 
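The `ExecWithOptions` entries above show how the intra-pod connectivity check works: the test curls the agnhost `/dial` endpoint on a host test pod, asking it to relay a UDP hostname request to each target pod IP. A small Python helper reconstructing that URL (the IPs are the pod IPs from this run; the helper itself is illustrative, not part of the framework):

```python
from urllib.parse import urlencode

def dial_url(probe_ip, target_ip, protocol="udp", port=8081):
    # agnhost's /dial endpoint relays `tries` requests of kind `request`
    # to host:port over the given protocol and reports the responses.
    query = urlencode({"request": "hostname", "protocol": protocol,
                       "host": target_ip, "port": port, "tries": 1})
    return f"http://{probe_ip}:8080/dial?{query}"

url = dial_url("10.244.2.164", "10.244.1.247")
```

An empty `Waiting for responses: map[]` in the log means every relayed request came back, i.e. no expected hostname was left unanswered.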
• [SLOW TEST:24.478 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29 should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":102,"skipped":1636,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should support configurable pod DNS nameservers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 10 21:36:00.738: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should support configurable pod DNS nameservers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod with dnsPolicy=None and customized dnsConfig... 
May 10 21:36:00.808: INFO: Created pod &Pod{ObjectMeta:{dns-6495 dns-6495 /api/v1/namespaces/dns-6495/pods/dns-6495 f55c27fb-73c6-44c3-9806-d14efb93dbb4 15066151 0 2020-05-10 21:36:00 +0000 UTC map[] map[] [] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-4f9jq,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-4f9jq,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,Command:[],Args:[pause],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-4f9jq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:None,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname
:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:&PodDNSConfig{Nameservers:[1.1.1.1],Searches:[resolv.conf.local],Options:[]PodDNSConfigOption{},},ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} STEP: Verifying customized DNS suffix list is configured on pod... 
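The pod dump above is dense; the fields that matter for this test are `DNSPolicy:None` plus the custom `DNSConfig`. The sketch below is a hand-written reduction of that dump (not test output), with a simplified rendering of the resolv.conf lines kubelet derives from these fields:

```python
# The DNS-relevant slice of the pod spec logged above.
dns_spec = {
    "dnsPolicy": "None",  # ignore cluster DNS settings entirely
    "dnsConfig": {
        "nameservers": ["1.1.1.1"],
        "searches": ["resolv.conf.local"],
    },
}

def render_resolv_conf(cfg):
    """Roughly how these fields surface in the container's
    /etc/resolv.conf (a simplification of kubelet's behaviour)."""
    lines = [f"nameserver {ns}" for ns in cfg["nameservers"]]
    if cfg["searches"]:
        lines.append("search " + " ".join(cfg["searches"]))
    return "\n".join(lines)
```

This is what the two follow-up `agnhost dns-suffix` and `agnhost dns-server-list` exec calls verify from inside the pod.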
May 10 21:36:04.849: INFO: ExecWithOptions {Command:[/agnhost dns-suffix] Namespace:dns-6495 PodName:dns-6495 ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 10 21:36:04.849: INFO: >>> kubeConfig: /root/.kube/config I0510 21:36:04.883489 6 log.go:172] (0xc0019954a0) (0xc001ea8780) Create stream I0510 21:36:04.883516 6 log.go:172] (0xc0019954a0) (0xc001ea8780) Stream added, broadcasting: 1 I0510 21:36:04.885788 6 log.go:172] (0xc0019954a0) Reply frame received for 1 I0510 21:36:04.885817 6 log.go:172] (0xc0019954a0) (0xc001ea88c0) Create stream I0510 21:36:04.885833 6 log.go:172] (0xc0019954a0) (0xc001ea88c0) Stream added, broadcasting: 3 I0510 21:36:04.886605 6 log.go:172] (0xc0019954a0) Reply frame received for 3 I0510 21:36:04.886645 6 log.go:172] (0xc0019954a0) (0xc0014503c0) Create stream I0510 21:36:04.886663 6 log.go:172] (0xc0019954a0) (0xc0014503c0) Stream added, broadcasting: 5 I0510 21:36:04.887414 6 log.go:172] (0xc0019954a0) Reply frame received for 5 I0510 21:36:04.975330 6 log.go:172] (0xc0019954a0) Data frame received for 3 I0510 21:36:04.975353 6 log.go:172] (0xc001ea88c0) (3) Data frame handling I0510 21:36:04.975366 6 log.go:172] (0xc001ea88c0) (3) Data frame sent I0510 21:36:04.976191 6 log.go:172] (0xc0019954a0) Data frame received for 3 I0510 21:36:04.976216 6 log.go:172] (0xc001ea88c0) (3) Data frame handling I0510 21:36:04.976319 6 log.go:172] (0xc0019954a0) Data frame received for 5 I0510 21:36:04.976335 6 log.go:172] (0xc0014503c0) (5) Data frame handling I0510 21:36:04.977790 6 log.go:172] (0xc0019954a0) Data frame received for 1 I0510 21:36:04.977813 6 log.go:172] (0xc001ea8780) (1) Data frame handling I0510 21:36:04.977823 6 log.go:172] (0xc001ea8780) (1) Data frame sent I0510 21:36:04.977836 6 log.go:172] (0xc0019954a0) (0xc001ea8780) Stream removed, broadcasting: 1 I0510 21:36:04.977879 6 log.go:172] (0xc0019954a0) Go away received I0510 21:36:04.977923 6 log.go:172] (0xc0019954a0) 
(0xc001ea8780) Stream removed, broadcasting: 1 I0510 21:36:04.977935 6 log.go:172] (0xc0019954a0) (0xc001ea88c0) Stream removed, broadcasting: 3 I0510 21:36:04.977945 6 log.go:172] (0xc0019954a0) (0xc0014503c0) Stream removed, broadcasting: 5 STEP: Verifying customized DNS server is configured on pod... May 10 21:36:04.977: INFO: ExecWithOptions {Command:[/agnhost dns-server-list] Namespace:dns-6495 PodName:dns-6495 ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 10 21:36:04.977: INFO: >>> kubeConfig: /root/.kube/config I0510 21:36:05.005237 6 log.go:172] (0xc00181c420) (0xc001a8fcc0) Create stream I0510 21:36:05.005266 6 log.go:172] (0xc00181c420) (0xc001a8fcc0) Stream added, broadcasting: 1 I0510 21:36:05.016028 6 log.go:172] (0xc00181c420) Reply frame received for 1 I0510 21:36:05.016107 6 log.go:172] (0xc00181c420) (0xc000ff4000) Create stream I0510 21:36:05.016129 6 log.go:172] (0xc00181c420) (0xc000ff4000) Stream added, broadcasting: 3 I0510 21:36:05.017351 6 log.go:172] (0xc00181c420) Reply frame received for 3 I0510 21:36:05.017372 6 log.go:172] (0xc00181c420) (0xc000ff41e0) Create stream I0510 21:36:05.017389 6 log.go:172] (0xc00181c420) (0xc000ff41e0) Stream added, broadcasting: 5 I0510 21:36:05.018361 6 log.go:172] (0xc00181c420) Reply frame received for 5 I0510 21:36:05.089683 6 log.go:172] (0xc00181c420) Data frame received for 3 I0510 21:36:05.089708 6 log.go:172] (0xc000ff4000) (3) Data frame handling I0510 21:36:05.089724 6 log.go:172] (0xc000ff4000) (3) Data frame sent I0510 21:36:05.090680 6 log.go:172] (0xc00181c420) Data frame received for 5 I0510 21:36:05.090699 6 log.go:172] (0xc000ff41e0) (5) Data frame handling I0510 21:36:05.090850 6 log.go:172] (0xc00181c420) Data frame received for 3 I0510 21:36:05.090881 6 log.go:172] (0xc000ff4000) (3) Data frame handling I0510 21:36:05.092170 6 log.go:172] (0xc00181c420) Data frame received for 1 I0510 21:36:05.092189 6 log.go:172] (0xc001a8fcc0) (1) 
Data frame handling I0510 21:36:05.092202 6 log.go:172] (0xc001a8fcc0) (1) Data frame sent I0510 21:36:05.092222 6 log.go:172] (0xc00181c420) (0xc001a8fcc0) Stream removed, broadcasting: 1 I0510 21:36:05.092239 6 log.go:172] (0xc00181c420) Go away received I0510 21:36:05.092386 6 log.go:172] (0xc00181c420) (0xc001a8fcc0) Stream removed, broadcasting: 1 I0510 21:36:05.092413 6 log.go:172] (0xc00181c420) (0xc000ff4000) Stream removed, broadcasting: 3 I0510 21:36:05.092428 6 log.go:172] (0xc00181c420) (0xc000ff41e0) Stream removed, broadcasting: 5 May 10 21:36:05.092: INFO: Deleting pod dns-6495... [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 10 21:36:05.107: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-6495" for this suite. •{"msg":"PASSED [sig-network] DNS should support configurable pod DNS nameservers [Conformance]","total":278,"completed":103,"skipped":1676,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 10 21:36:05.130: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read 
extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 10 21:36:06.660: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 10 21:36:08.677: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724743366, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724743366, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724743367, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724743366, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 10 21:36:11.727: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should unconditionally reject operations on fail closed webhook [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Registering a webhook that server cannot talk to, with fail closed policy, via the AdmissionRegistration API STEP: create a namespace for the webhook STEP: create a configmap should be unconditionally rejected by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 10 21:36:11.842: INFO: Waiting up to 
3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-8055" for this suite. STEP: Destroying namespace "webhook-8055-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:6.909 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should unconditionally reject operations on fail closed webhook [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","total":278,"completed":104,"skipped":1710,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 10 21:36:12.040: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [BeforeEach] Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1357 STEP: creating a pod May 10 21:36:12.154: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run logs-generator --generator=run-pod/v1 
--image=gcr.io/kubernetes-e2e-test-images/agnhost:2.8 --namespace=kubectl-7044 -- logs-generator --log-lines-total 100 --run-duration 20s' May 10 21:36:12.268: INFO: stderr: "" May 10 21:36:12.268: INFO: stdout: "pod/logs-generator created\n" [It] should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Waiting for log generator to start. May 10 21:36:12.268: INFO: Waiting up to 5m0s for 1 pods to be running and ready, or succeeded: [logs-generator] May 10 21:36:12.268: INFO: Waiting up to 5m0s for pod "logs-generator" in namespace "kubectl-7044" to be "running and ready, or succeeded" May 10 21:36:12.275: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 6.61782ms May 10 21:36:14.279: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 2.010507155s May 10 21:36:16.283: INFO: Pod "logs-generator": Phase="Running", Reason="", readiness=true. Elapsed: 4.014352356s May 10 21:36:16.283: INFO: Pod "logs-generator" satisfied condition "running and ready, or succeeded" May 10 21:36:16.283: INFO: Wanted all 1 pods to be running and ready, or succeeded. Result: true. 
Pods: [logs-generator] STEP: checking for a matching strings May 10 21:36:16.283: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-7044' May 10 21:36:16.387: INFO: stderr: "" May 10 21:36:16.387: INFO: stdout: "I0510 21:36:15.064803 1 logs_generator.go:76] 0 PUT /api/v1/namespaces/ns/pods/hdf 436\nI0510 21:36:15.265063 1 logs_generator.go:76] 1 POST /api/v1/namespaces/kube-system/pods/z6x 509\nI0510 21:36:15.465026 1 logs_generator.go:76] 2 PUT /api/v1/namespaces/default/pods/fp6 269\nI0510 21:36:15.665013 1 logs_generator.go:76] 3 POST /api/v1/namespaces/default/pods/rq6w 220\nI0510 21:36:15.865001 1 logs_generator.go:76] 4 POST /api/v1/namespaces/default/pods/gkt 350\nI0510 21:36:16.064971 1 logs_generator.go:76] 5 GET /api/v1/namespaces/kube-system/pods/qqwj 224\nI0510 21:36:16.265049 1 logs_generator.go:76] 6 PUT /api/v1/namespaces/default/pods/tbb 282\n" STEP: limiting log lines May 10 21:36:16.387: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-7044 --tail=1' May 10 21:36:16.498: INFO: stderr: "" May 10 21:36:16.498: INFO: stdout: "I0510 21:36:16.464973 1 logs_generator.go:76] 7 GET /api/v1/namespaces/kube-system/pods/5gsg 450\n" May 10 21:36:16.498: INFO: got output "I0510 21:36:16.464973 1 logs_generator.go:76] 7 GET /api/v1/namespaces/kube-system/pods/5gsg 450\n" STEP: limiting log bytes May 10 21:36:16.498: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-7044 --limit-bytes=1' May 10 21:36:16.601: INFO: stderr: "" May 10 21:36:16.602: INFO: stdout: "I" May 10 21:36:16.602: INFO: got output "I" STEP: exposing timestamps May 10 21:36:16.602: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-7044 --tail=1 --timestamps' May 10 21:36:16.725: INFO: stderr: "" 
May 10 21:36:16.725: INFO: stdout: "2020-05-10T21:36:16.665376502Z I0510 21:36:16.664983 1 logs_generator.go:76] 8 POST /api/v1/namespaces/ns/pods/5vh 420\n" May 10 21:36:16.725: INFO: got output "2020-05-10T21:36:16.665376502Z I0510 21:36:16.664983 1 logs_generator.go:76] 8 POST /api/v1/namespaces/ns/pods/5vh 420\n" STEP: restricting to a time range May 10 21:36:19.225: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-7044 --since=1s' May 10 21:36:19.352: INFO: stderr: "" May 10 21:36:19.352: INFO: stdout: "I0510 21:36:18.465037 1 logs_generator.go:76] 17 PUT /api/v1/namespaces/kube-system/pods/td9j 530\nI0510 21:36:18.664982 1 logs_generator.go:76] 18 PUT /api/v1/namespaces/default/pods/6w6 391\nI0510 21:36:18.864991 1 logs_generator.go:76] 19 PUT /api/v1/namespaces/kube-system/pods/29j 567\nI0510 21:36:19.064956 1 logs_generator.go:76] 20 PUT /api/v1/namespaces/default/pods/gwv 246\nI0510 21:36:19.264973 1 logs_generator.go:76] 21 GET /api/v1/namespaces/ns/pods/vf69 515\n" May 10 21:36:19.352: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-7044 --since=24h' May 10 21:36:19.456: INFO: stderr: "" May 10 21:36:19.456: INFO: stdout: "I0510 21:36:15.064803 1 logs_generator.go:76] 0 PUT /api/v1/namespaces/ns/pods/hdf 436\nI0510 21:36:15.265063 1 logs_generator.go:76] 1 POST /api/v1/namespaces/kube-system/pods/z6x 509\nI0510 21:36:15.465026 1 logs_generator.go:76] 2 PUT /api/v1/namespaces/default/pods/fp6 269\nI0510 21:36:15.665013 1 logs_generator.go:76] 3 POST /api/v1/namespaces/default/pods/rq6w 220\nI0510 21:36:15.865001 1 logs_generator.go:76] 4 POST /api/v1/namespaces/default/pods/gkt 350\nI0510 21:36:16.064971 1 logs_generator.go:76] 5 GET /api/v1/namespaces/kube-system/pods/qqwj 224\nI0510 21:36:16.265049 1 logs_generator.go:76] 6 PUT /api/v1/namespaces/default/pods/tbb 282\nI0510 21:36:16.464973 1 
logs_generator.go:76] 7 GET /api/v1/namespaces/kube-system/pods/5gsg 450\nI0510 21:36:16.664983 1 logs_generator.go:76] 8 POST /api/v1/namespaces/ns/pods/5vh 420\nI0510 21:36:16.865032 1 logs_generator.go:76] 9 GET /api/v1/namespaces/default/pods/wdf 211\nI0510 21:36:17.064968 1 logs_generator.go:76] 10 GET /api/v1/namespaces/kube-system/pods/c5sk 228\nI0510 21:36:17.264977 1 logs_generator.go:76] 11 GET /api/v1/namespaces/default/pods/sxmx 495\nI0510 21:36:17.464952 1 logs_generator.go:76] 12 GET /api/v1/namespaces/default/pods/wrb 318\nI0510 21:36:17.664974 1 logs_generator.go:76] 13 GET /api/v1/namespaces/default/pods/6mn 392\nI0510 21:36:17.865050 1 logs_generator.go:76] 14 PUT /api/v1/namespaces/ns/pods/tl8 264\nI0510 21:36:18.064983 1 logs_generator.go:76] 15 PUT /api/v1/namespaces/ns/pods/vm56 337\nI0510 21:36:18.264978 1 logs_generator.go:76] 16 POST /api/v1/namespaces/default/pods/lqnq 497\nI0510 21:36:18.465037 1 logs_generator.go:76] 17 PUT /api/v1/namespaces/kube-system/pods/td9j 530\nI0510 21:36:18.664982 1 logs_generator.go:76] 18 PUT /api/v1/namespaces/default/pods/6w6 391\nI0510 21:36:18.864991 1 logs_generator.go:76] 19 PUT /api/v1/namespaces/kube-system/pods/29j 567\nI0510 21:36:19.064956 1 logs_generator.go:76] 20 PUT /api/v1/namespaces/default/pods/gwv 246\nI0510 21:36:19.264973 1 logs_generator.go:76] 21 GET /api/v1/namespaces/ns/pods/vf69 515\n" [AfterEach] Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1363 May 10 21:36:19.457: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pod logs-generator --namespace=kubectl-7044' May 10 21:36:29.487: INFO: stderr: "" May 10 21:36:29.488: INFO: stdout: "pod \"logs-generator\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 10 21:36:29.488: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying 
namespace "kubectl-7044" for this suite. • [SLOW TEST:17.460 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1353 should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]","total":278,"completed":105,"skipped":1733,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 10 21:36:29.500: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin May 10 21:36:29.631: INFO: Waiting up to 5m0s for pod "downwardapi-volume-9973d522-3b7a-46e9-b776-c5b1f5a003d2" in namespace "projected-7952" to be "success or failure" May 10 21:36:29.647: INFO: Pod 
"downwardapi-volume-9973d522-3b7a-46e9-b776-c5b1f5a003d2": Phase="Pending", Reason="", readiness=false. Elapsed: 15.69179ms May 10 21:36:31.651: INFO: Pod "downwardapi-volume-9973d522-3b7a-46e9-b776-c5b1f5a003d2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019542094s May 10 21:36:33.655: INFO: Pod "downwardapi-volume-9973d522-3b7a-46e9-b776-c5b1f5a003d2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.023845529s STEP: Saw pod success May 10 21:36:33.655: INFO: Pod "downwardapi-volume-9973d522-3b7a-46e9-b776-c5b1f5a003d2" satisfied condition "success or failure" May 10 21:36:33.658: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-9973d522-3b7a-46e9-b776-c5b1f5a003d2 container client-container: STEP: delete the pod May 10 21:36:33.709: INFO: Waiting for pod downwardapi-volume-9973d522-3b7a-46e9-b776-c5b1f5a003d2 to disappear May 10 21:36:33.712: INFO: Pod downwardapi-volume-9973d522-3b7a-46e9-b776-c5b1f5a003d2 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 10 21:36:33.712: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-7952" for this suite. 
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":278,"completed":106,"skipped":1759,"failed":0} SSSSS ------------------------------ [sig-apps] Job should delete a job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 10 21:36:33.720: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename job STEP: Waiting for a default service account to be provisioned in namespace [It] should delete a job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a job STEP: Ensuring active pods == parallelism STEP: delete a job STEP: deleting Job.batch foo in namespace job-6676, will wait for the garbage collector to delete the pods May 10 21:36:39.836: INFO: Deleting Job.batch foo took: 6.568698ms May 10 21:36:40.136: INFO: Terminating Job.batch foo pods took: 300.217858ms STEP: Ensuring job was deleted [AfterEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 10 21:37:19.640: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "job-6676" for this suite. 
• [SLOW TEST:45.929 seconds] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should delete a job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Job should delete a job [Conformance]","total":278,"completed":107,"skipped":1764,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 10 21:37:19.649: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name configmap-test-volume-53254da1-a371-4b7f-afda-747d9202d9e4 STEP: Creating a pod to test consume configMaps May 10 21:37:19.774: INFO: Waiting up to 5m0s for pod "pod-configmaps-90edc713-1014-4715-a68b-c59aea01764c" in namespace "configmap-8297" to be "success or failure" May 10 21:37:19.778: INFO: Pod "pod-configmaps-90edc713-1014-4715-a68b-c59aea01764c": Phase="Pending", Reason="", readiness=false. Elapsed: 3.589023ms May 10 21:37:21.781: INFO: Pod "pod-configmaps-90edc713-1014-4715-a68b-c59aea01764c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006992265s May 10 21:37:23.785: INFO: Pod "pod-configmaps-90edc713-1014-4715-a68b-c59aea01764c": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.010833938s STEP: Saw pod success May 10 21:37:23.785: INFO: Pod "pod-configmaps-90edc713-1014-4715-a68b-c59aea01764c" satisfied condition "success or failure" May 10 21:37:23.787: INFO: Trying to get logs from node jerma-worker2 pod pod-configmaps-90edc713-1014-4715-a68b-c59aea01764c container configmap-volume-test: STEP: delete the pod May 10 21:37:23.816: INFO: Waiting for pod pod-configmaps-90edc713-1014-4715-a68b-c59aea01764c to disappear May 10 21:37:23.820: INFO: Pod pod-configmaps-90edc713-1014-4715-a68b-c59aea01764c no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 10 21:37:23.820: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-8297" for this suite. •{"msg":"PASSED [sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":278,"completed":108,"skipped":1777,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group but different versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 10 21:37:23.828: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for multiple CRDs of same group but different versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: CRs in the same group but different versions (one 
multiversion CRD) show up in OpenAPI documentation May 10 21:37:23.906: INFO: >>> kubeConfig: /root/.kube/config STEP: CRs in the same group but different versions (two CRDs) show up in OpenAPI documentation May 10 21:37:34.278: INFO: >>> kubeConfig: /root/.kube/config May 10 21:37:37.164: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 10 21:37:46.597: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-8700" for this suite. • [SLOW TEST:22.774 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for multiple CRDs of same group but different versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group but different versions [Conformance]","total":278,"completed":109,"skipped":1815,"failed":0} SSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 10 21:37:46.602: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for intra-pod communication: 
http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Performing setup for networking test in namespace pod-network-test-6775 STEP: creating a selector STEP: Creating the service pods in kubernetes May 10 21:37:46.654: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods May 10 21:38:12.728: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.252:8080/dial?request=hostname&protocol=http&host=10.244.1.251&port=8080&tries=1'] Namespace:pod-network-test-6775 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 10 21:38:12.728: INFO: >>> kubeConfig: /root/.kube/config I0510 21:38:12.763816 6 log.go:172] (0xc002c880b0) (0xc00147f7c0) Create stream I0510 21:38:12.763863 6 log.go:172] (0xc002c880b0) (0xc00147f7c0) Stream added, broadcasting: 1 I0510 21:38:12.766142 6 log.go:172] (0xc002c880b0) Reply frame received for 1 I0510 21:38:12.766203 6 log.go:172] (0xc002c880b0) (0xc0011d01e0) Create stream I0510 21:38:12.766222 6 log.go:172] (0xc002c880b0) (0xc0011d01e0) Stream added, broadcasting: 3 I0510 21:38:12.767020 6 log.go:172] (0xc002c880b0) Reply frame received for 3 I0510 21:38:12.767056 6 log.go:172] (0xc002c880b0) (0xc000cea000) Create stream I0510 21:38:12.767071 6 log.go:172] (0xc002c880b0) (0xc000cea000) Stream added, broadcasting: 5 I0510 21:38:12.767765 6 log.go:172] (0xc002c880b0) Reply frame received for 5 I0510 21:38:12.881024 6 log.go:172] (0xc002c880b0) Data frame received for 3 I0510 21:38:12.881066 6 log.go:172] (0xc0011d01e0) (3) Data frame handling I0510 21:38:12.881100 6 log.go:172] (0xc0011d01e0) (3) Data frame sent I0510 21:38:12.881581 6 log.go:172] (0xc002c880b0) Data frame received for 5 I0510 21:38:12.881611 6 log.go:172] (0xc000cea000) (5) Data frame handling I0510 21:38:12.882005 6 log.go:172] (0xc002c880b0) Data 
frame received for 3 I0510 21:38:12.882032 6 log.go:172] (0xc0011d01e0) (3) Data frame handling I0510 21:38:12.883863 6 log.go:172] (0xc002c880b0) Data frame received for 1 I0510 21:38:12.883921 6 log.go:172] (0xc00147f7c0) (1) Data frame handling I0510 21:38:12.883953 6 log.go:172] (0xc00147f7c0) (1) Data frame sent I0510 21:38:12.883982 6 log.go:172] (0xc002c880b0) (0xc00147f7c0) Stream removed, broadcasting: 1 I0510 21:38:12.884039 6 log.go:172] (0xc002c880b0) Go away received I0510 21:38:12.884126 6 log.go:172] (0xc002c880b0) (0xc00147f7c0) Stream removed, broadcasting: 1 I0510 21:38:12.884158 6 log.go:172] (0xc002c880b0) (0xc0011d01e0) Stream removed, broadcasting: 3 I0510 21:38:12.884191 6 log.go:172] (0xc002c880b0) (0xc000cea000) Stream removed, broadcasting: 5 May 10 21:38:12.884: INFO: Waiting for responses: map[] May 10 21:38:12.887: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.252:8080/dial?request=hostname&protocol=http&host=10.244.2.169&port=8080&tries=1'] Namespace:pod-network-test-6775 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 10 21:38:12.887: INFO: >>> kubeConfig: /root/.kube/config I0510 21:38:12.917836 6 log.go:172] (0xc001994370) (0xc0011d0820) Create stream I0510 21:38:12.917859 6 log.go:172] (0xc001994370) (0xc0011d0820) Stream added, broadcasting: 1 I0510 21:38:12.919776 6 log.go:172] (0xc001994370) Reply frame received for 1 I0510 21:38:12.919840 6 log.go:172] (0xc001994370) (0xc001f1cf00) Create stream I0510 21:38:12.919865 6 log.go:172] (0xc001994370) (0xc001f1cf00) Stream added, broadcasting: 3 I0510 21:38:12.921264 6 log.go:172] (0xc001994370) Reply frame received for 3 I0510 21:38:12.921293 6 log.go:172] (0xc001994370) (0xc001f1cfa0) Create stream I0510 21:38:12.921307 6 log.go:172] (0xc001994370) (0xc001f1cfa0) Stream added, broadcasting: 5 I0510 21:38:12.922730 6 log.go:172] (0xc001994370) Reply frame received for 5 I0510 
21:38:13.000022 6 log.go:172] (0xc001994370) Data frame received for 3 I0510 21:38:13.000047 6 log.go:172] (0xc001f1cf00) (3) Data frame handling I0510 21:38:13.000078 6 log.go:172] (0xc001f1cf00) (3) Data frame sent I0510 21:38:13.000406 6 log.go:172] (0xc001994370) Data frame received for 5 I0510 21:38:13.000435 6 log.go:172] (0xc001994370) Data frame received for 3 I0510 21:38:13.000469 6 log.go:172] (0xc001f1cf00) (3) Data frame handling I0510 21:38:13.000497 6 log.go:172] (0xc001f1cfa0) (5) Data frame handling I0510 21:38:13.001944 6 log.go:172] (0xc001994370) Data frame received for 1 I0510 21:38:13.001965 6 log.go:172] (0xc0011d0820) (1) Data frame handling I0510 21:38:13.001975 6 log.go:172] (0xc0011d0820) (1) Data frame sent I0510 21:38:13.001996 6 log.go:172] (0xc001994370) (0xc0011d0820) Stream removed, broadcasting: 1 I0510 21:38:13.002015 6 log.go:172] (0xc001994370) Go away received I0510 21:38:13.002128 6 log.go:172] (0xc001994370) (0xc0011d0820) Stream removed, broadcasting: 1 I0510 21:38:13.002146 6 log.go:172] (0xc001994370) (0xc001f1cf00) Stream removed, broadcasting: 3 I0510 21:38:13.002154 6 log.go:172] (0xc001994370) (0xc001f1cfa0) Stream removed, broadcasting: 5 May 10 21:38:13.002: INFO: Waiting for responses: map[] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 10 21:38:13.002: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-6775" for this suite. 
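The intra-pod check above execs into the host-network test pod and curls the agnhost `/dial` endpoint, which in turn probes the target pod and reports which hostnames responded. Re-run by hand it would look roughly like this (the pod IPs are the ephemeral ones from this particular run):

```shell
# Ask the agnhost /dial endpoint on the test pod to probe the target pod once over HTTP
kubectl exec host-test-container-pod --namespace=pod-network-test-6775 -c agnhost -- \
  /bin/sh -c "curl -g -q -s 'http://10.244.1.252:8080/dial?request=hostname&protocol=http&host=10.244.1.251&port=8080&tries=1'"
```

An empty `Waiting for responses: map[]` in the log means every probed host answered, so the test passes.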
• [SLOW TEST:26.407 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29 should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":110,"skipped":1821,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 10 21:38:13.010: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin May 10 21:38:13.130: INFO: Waiting up to 5m0s for pod "downwardapi-volume-94ef6c36-23d2-4cd0-817e-46b0bbca1e6d" in namespace "downward-api-1008" to be "success or failure" May 10 21:38:13.146: INFO: Pod "downwardapi-volume-94ef6c36-23d2-4cd0-817e-46b0bbca1e6d": 
Phase="Pending", Reason="", readiness=false. Elapsed: 15.919186ms May 10 21:38:15.362: INFO: Pod "downwardapi-volume-94ef6c36-23d2-4cd0-817e-46b0bbca1e6d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.23242845s May 10 21:38:17.451: INFO: Pod "downwardapi-volume-94ef6c36-23d2-4cd0-817e-46b0bbca1e6d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.321245134s STEP: Saw pod success May 10 21:38:17.451: INFO: Pod "downwardapi-volume-94ef6c36-23d2-4cd0-817e-46b0bbca1e6d" satisfied condition "success or failure" May 10 21:38:17.454: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-94ef6c36-23d2-4cd0-817e-46b0bbca1e6d container client-container: STEP: delete the pod May 10 21:38:17.498: INFO: Waiting for pod downwardapi-volume-94ef6c36-23d2-4cd0-817e-46b0bbca1e6d to disappear May 10 21:38:17.558: INFO: Pod downwardapi-volume-94ef6c36-23d2-4cd0-817e-46b0bbca1e6d no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 10 21:38:17.558: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-1008" for this suite. 
•{"msg":"PASSED [sig-storage] Downward API volume should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":111,"skipped":1859,"failed":0} SSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 10 21:38:17.565: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:133 [It] should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a simple DaemonSet "daemon-set" STEP: Check that daemon pods launch on every node of the cluster. 
May 10 21:38:17.672: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 10 21:38:17.677: INFO: Number of nodes with available pods: 0 May 10 21:38:17.677: INFO: Node jerma-worker is running more than one daemon pod May 10 21:38:18.723: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 10 21:38:18.774: INFO: Number of nodes with available pods: 0 May 10 21:38:18.774: INFO: Node jerma-worker is running more than one daemon pod May 10 21:38:19.860: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 10 21:38:19.870: INFO: Number of nodes with available pods: 0 May 10 21:38:19.870: INFO: Node jerma-worker is running more than one daemon pod May 10 21:38:20.867: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 10 21:38:20.888: INFO: Number of nodes with available pods: 0 May 10 21:38:20.888: INFO: Node jerma-worker is running more than one daemon pod May 10 21:38:21.683: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 10 21:38:21.687: INFO: Number of nodes with available pods: 0 May 10 21:38:21.687: INFO: Node jerma-worker is running more than one daemon pod May 10 21:38:22.681: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 10 21:38:22.684: INFO: Number of nodes with available pods: 2 May 10 21:38:22.684: INFO: Number 
of running nodes: 2, number of available pods: 2 STEP: Set a daemon pod's phase to 'Failed', check that the daemon pod is revived. May 10 21:38:22.750: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 10 21:38:22.756: INFO: Number of nodes with available pods: 1 May 10 21:38:22.756: INFO: Node jerma-worker2 is running more than one daemon pod May 10 21:38:23.761: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 10 21:38:23.765: INFO: Number of nodes with available pods: 1 May 10 21:38:23.765: INFO: Node jerma-worker2 is running more than one daemon pod May 10 21:38:24.762: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 10 21:38:24.766: INFO: Number of nodes with available pods: 1 May 10 21:38:24.766: INFO: Node jerma-worker2 is running more than one daemon pod May 10 21:38:25.762: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 10 21:38:25.765: INFO: Number of nodes with available pods: 1 May 10 21:38:25.765: INFO: Node jerma-worker2 is running more than one daemon pod May 10 21:38:26.762: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 10 21:38:26.766: INFO: Number of nodes with available pods: 2 May 10 21:38:26.766: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Wait for the failed daemon pod to be completely deleted. 
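The availability counts logged above come from the DaemonSet status, and the control-plane node is skipped because the DaemonSet does not tolerate its NoSchedule taint. Observing the revival of the failed daemon pod by hand might look like this (namespace from the log):

```shell
# Compare desired vs. available daemon pods while the failed pod is replaced
kubectl get daemonset daemon-set --namespace=daemonsets-4899 \
  -o jsonpath='{.status.desiredNumberScheduled} {.status.numberAvailable}'

# The control-plane node carries the taint the DaemonSet pods cannot tolerate
kubectl describe node jerma-control-plane | grep -A1 Taints
```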
[AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:99 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-4899, will wait for the garbage collector to delete the pods May 10 21:38:26.831: INFO: Deleting DaemonSet.extensions daemon-set took: 7.388713ms May 10 21:38:27.132: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.283497ms May 10 21:38:39.553: INFO: Number of nodes with available pods: 0 May 10 21:38:39.553: INFO: Number of running nodes: 0, number of available pods: 0 May 10 21:38:39.556: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-4899/daemonsets","resourceVersion":"15067033"},"items":null} May 10 21:38:39.558: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-4899/pods","resourceVersion":"15067033"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 10 21:38:39.567: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-4899" for this suite. • [SLOW TEST:22.010 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]","total":278,"completed":112,"skipped":1869,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a secret. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 10 21:38:39.575: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a secret. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Discovering how many secrets are in namespace by default STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a Secret STEP: Ensuring resource quota status captures secret creation STEP: Deleting a secret STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 10 21:38:56.674: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-9546" for this suite. • [SLOW TEST:17.108 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a secret. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a secret. 
[Conformance]","total":278,"completed":113,"skipped":1883,"failed":0} SSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 10 21:38:56.683: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name projected-configmap-test-volume-map-a81c69af-d179-46d2-8296-44f761726ee7 STEP: Creating a pod to test consume configMaps May 10 21:38:56.893: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-f8e9d987-b8f7-470c-8136-46f4813a1a99" in namespace "projected-9407" to be "success or failure" May 10 21:38:56.947: INFO: Pod "pod-projected-configmaps-f8e9d987-b8f7-470c-8136-46f4813a1a99": Phase="Pending", Reason="", readiness=false. Elapsed: 53.41322ms May 10 21:38:58.951: INFO: Pod "pod-projected-configmaps-f8e9d987-b8f7-470c-8136-46f4813a1a99": Phase="Pending", Reason="", readiness=false. Elapsed: 2.057666041s May 10 21:39:00.955: INFO: Pod "pod-projected-configmaps-f8e9d987-b8f7-470c-8136-46f4813a1a99": Phase="Pending", Reason="", readiness=false. Elapsed: 4.06149119s May 10 21:39:02.959: INFO: Pod "pod-projected-configmaps-f8e9d987-b8f7-470c-8136-46f4813a1a99": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.066058911s STEP: Saw pod success May 10 21:39:02.959: INFO: Pod "pod-projected-configmaps-f8e9d987-b8f7-470c-8136-46f4813a1a99" satisfied condition "success or failure" May 10 21:39:02.963: INFO: Trying to get logs from node jerma-worker pod pod-projected-configmaps-f8e9d987-b8f7-470c-8136-46f4813a1a99 container projected-configmap-volume-test: STEP: delete the pod May 10 21:39:02.992: INFO: Waiting for pod pod-projected-configmaps-f8e9d987-b8f7-470c-8136-46f4813a1a99 to disappear May 10 21:39:03.011: INFO: Pod pod-projected-configmaps-f8e9d987-b8f7-470c-8136-46f4813a1a99 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 10 21:39:03.011: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-9407" for this suite. • [SLOW TEST:6.334 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":114,"skipped":1890,"failed":0} SSSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Container Runtime 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 10 21:39:03.018: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the container STEP: wait for the container to reach Failed STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set May 10 21:39:07.330: INFO: Expected: &{DONE} to match Container's Termination Message: DONE -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 10 21:39:07.530: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-157" for this suite. 
•{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":278,"completed":115,"skipped":1894,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 10 21:39:07.538: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name configmap-test-volume-adb6ab2d-0cae-4a50-b576-bb787b929439 STEP: Creating a pod to test consume configMaps May 10 21:39:07.753: INFO: Waiting up to 5m0s for pod "pod-configmaps-f360eeff-91d8-41d4-8845-3e5800e9c0fe" in namespace "configmap-6578" to be "success or failure" May 10 21:39:07.757: INFO: Pod "pod-configmaps-f360eeff-91d8-41d4-8845-3e5800e9c0fe": Phase="Pending", Reason="", readiness=false. Elapsed: 4.045991ms May 10 21:39:09.782: INFO: Pod "pod-configmaps-f360eeff-91d8-41d4-8845-3e5800e9c0fe": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02910352s May 10 21:39:11.786: INFO: Pod "pod-configmaps-f360eeff-91d8-41d4-8845-3e5800e9c0fe": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.033251731s STEP: Saw pod success May 10 21:39:11.786: INFO: Pod "pod-configmaps-f360eeff-91d8-41d4-8845-3e5800e9c0fe" satisfied condition "success or failure" May 10 21:39:11.789: INFO: Trying to get logs from node jerma-worker pod pod-configmaps-f360eeff-91d8-41d4-8845-3e5800e9c0fe container configmap-volume-test: STEP: delete the pod May 10 21:39:11.820: INFO: Waiting for pod pod-configmaps-f360eeff-91d8-41d4-8845-3e5800e9c0fe to disappear May 10 21:39:11.835: INFO: Pod pod-configmaps-f360eeff-91d8-41d4-8845-3e5800e9c0fe no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 10 21:39:11.835: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-6578" for this suite. •{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":116,"skipped":1934,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 10 21:39:11.855: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting up data [It] should support subpaths with downward pod [LinuxOnly] [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating pod pod-subpath-test-downwardapi-kgph STEP: Creating a pod to test atomic-volume-subpath May 10 21:39:11.991: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-kgph" in namespace "subpath-8718" to be "success or failure" May 10 21:39:12.003: INFO: Pod "pod-subpath-test-downwardapi-kgph": Phase="Pending", Reason="", readiness=false. Elapsed: 12.105509ms May 10 21:39:14.064: INFO: Pod "pod-subpath-test-downwardapi-kgph": Phase="Pending", Reason="", readiness=false. Elapsed: 2.07294434s May 10 21:39:16.068: INFO: Pod "pod-subpath-test-downwardapi-kgph": Phase="Pending", Reason="", readiness=false. Elapsed: 4.077462006s May 10 21:39:18.093: INFO: Pod "pod-subpath-test-downwardapi-kgph": Phase="Running", Reason="", readiness=true. Elapsed: 6.102916409s May 10 21:39:20.098: INFO: Pod "pod-subpath-test-downwardapi-kgph": Phase="Running", Reason="", readiness=true. Elapsed: 8.107281347s May 10 21:39:22.102: INFO: Pod "pod-subpath-test-downwardapi-kgph": Phase="Running", Reason="", readiness=true. Elapsed: 10.111653427s May 10 21:39:24.106: INFO: Pod "pod-subpath-test-downwardapi-kgph": Phase="Running", Reason="", readiness=true. Elapsed: 12.11577115s May 10 21:39:26.111: INFO: Pod "pod-subpath-test-downwardapi-kgph": Phase="Running", Reason="", readiness=true. Elapsed: 14.120250755s May 10 21:39:28.115: INFO: Pod "pod-subpath-test-downwardapi-kgph": Phase="Running", Reason="", readiness=true. Elapsed: 16.124150072s May 10 21:39:30.119: INFO: Pod "pod-subpath-test-downwardapi-kgph": Phase="Running", Reason="", readiness=true. Elapsed: 18.128331157s May 10 21:39:32.123: INFO: Pod "pod-subpath-test-downwardapi-kgph": Phase="Running", Reason="", readiness=true. Elapsed: 20.132417311s May 10 21:39:34.127: INFO: Pod "pod-subpath-test-downwardapi-kgph": Phase="Running", Reason="", readiness=true. 
Elapsed: 22.136834704s May 10 21:39:36.132: INFO: Pod "pod-subpath-test-downwardapi-kgph": Phase="Running", Reason="", readiness=true. Elapsed: 24.141316751s May 10 21:39:38.136: INFO: Pod "pod-subpath-test-downwardapi-kgph": Phase="Succeeded", Reason="", readiness=false. Elapsed: 26.14506779s STEP: Saw pod success May 10 21:39:38.136: INFO: Pod "pod-subpath-test-downwardapi-kgph" satisfied condition "success or failure" May 10 21:39:38.138: INFO: Trying to get logs from node jerma-worker pod pod-subpath-test-downwardapi-kgph container test-container-subpath-downwardapi-kgph: STEP: delete the pod May 10 21:39:38.203: INFO: Waiting for pod pod-subpath-test-downwardapi-kgph to disappear May 10 21:39:38.219: INFO: Pod pod-subpath-test-downwardapi-kgph no longer exists STEP: Deleting pod pod-subpath-test-downwardapi-kgph May 10 21:39:38.219: INFO: Deleting pod "pod-subpath-test-downwardapi-kgph" in namespace "subpath-8718" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 10 21:39:38.221: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-8718" for this suite. 
• [SLOW TEST:26.375 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with downward pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [LinuxOnly] [Conformance]","total":278,"completed":117,"skipped":2004,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl version should check is all data is printed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 10 21:39:38.231: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [It] should check is all data is printed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 10 21:39:38.341: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config version' May 10 21:39:38.487: INFO: stderr: "" May 10 21:39:38.487: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"17\", GitVersion:\"v1.17.4\", GitCommit:\"8d8aa39598534325ad77120c120a22b3a990b5ea\", GitTreeState:\"clean\", BuildDate:\"2020-05-06T19:23:43Z\", GoVersion:\"go1.13.10\", Compiler:\"gc\", Platform:\"linux/amd64\"}\nServer Version: 
version.Info{Major:\"1\", Minor:\"17\", GitVersion:\"v1.17.2\", GitCommit:\"59603c6e503c87169aea6106f57b9f242f64df89\", GitTreeState:\"clean\", BuildDate:\"2020-02-07T01:05:17Z\", GoVersion:\"go1.13.5\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 10 21:39:38.487: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-5586" for this suite. •{"msg":"PASSED [sig-cli] Kubectl client Kubectl version should check is all data is printed [Conformance]","total":278,"completed":118,"skipped":2020,"failed":0} SSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 10 21:39:38.495: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 10 21:39:39.951: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 10 21:39:41.960: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, 
Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724743579, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724743579, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724743580, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724743579, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 10 21:39:44.994: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should be able to deny pod and configmap creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Registering the webhook via the AdmissionRegistration API STEP: create a pod that should be denied by the webhook STEP: create a pod that causes the webhook to hang STEP: create a configmap that should be denied by the webhook STEP: create a configmap that should be admitted by the webhook STEP: update (PUT) the admitted configmap to a non-compliant one should be rejected by the webhook STEP: update (PATCH) the admitted configmap to a non-compliant one should be rejected by the webhook STEP: create a namespace that bypass the webhook STEP: create a configmap that violates the webhook policy but is in a whitelisted namespace [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 10 21:39:55.143: INFO: Waiting up to 
3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-4304" for this suite. STEP: Destroying namespace "webhook-4304-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:16.885 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to deny pod and configmap creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","total":278,"completed":119,"skipped":2030,"failed":0} SS ------------------------------ [sig-storage] EmptyDir wrapper volumes should not conflict [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 10 21:39:55.380: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir-wrapper STEP: Waiting for a default service account to be provisioned in namespace [It] should not conflict [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Cleaning up the secret STEP: Cleaning up the configmap STEP: Cleaning up the pod [AfterEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 10 21:39:59.632: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace 
"emptydir-wrapper-6775" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not conflict [Conformance]","total":278,"completed":120,"skipped":2032,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 10 21:39:59.659: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 10 21:40:00.596: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 10 21:40:02.607: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724743600, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724743600, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, 
ext:63724743600, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724743600, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 10 21:40:05.724: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate custom resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 10 21:40:06.088: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the mutating webhook for custom resource e2e-test-webhook-4082-crds.webhook.example.com via the AdmissionRegistration API STEP: Creating a custom resource that should be mutated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 10 21:40:07.862: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-9829" for this suite. STEP: Destroying namespace "webhook-9829-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:8.395 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate custom resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]","total":278,"completed":121,"skipped":2060,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 10 21:40:08.055: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename e2e-kubelet-etc-hosts STEP: Waiting for a default service account to be provisioned in namespace [It] should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Setting up the test STEP: Creating hostNetwork=false pod STEP: Creating hostNetwork=true pod STEP: Running the test STEP: Verifying /etc/hosts of container is kubelet-managed for pod with hostNetwork=false May 10 21:40:22.239: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-409 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 10 
21:40:22.239: INFO: >>> kubeConfig: /root/.kube/config I0510 21:40:22.277354 6 log.go:172] (0xc0028fe8f0) (0xc001a12140) Create stream I0510 21:40:22.277384 6 log.go:172] (0xc0028fe8f0) (0xc001a12140) Stream added, broadcasting: 1 I0510 21:40:22.278752 6 log.go:172] (0xc0028fe8f0) Reply frame received for 1 I0510 21:40:22.278800 6 log.go:172] (0xc0028fe8f0) (0xc0009ca000) Create stream I0510 21:40:22.278812 6 log.go:172] (0xc0028fe8f0) (0xc0009ca000) Stream added, broadcasting: 3 I0510 21:40:22.279871 6 log.go:172] (0xc0028fe8f0) Reply frame received for 3 I0510 21:40:22.279913 6 log.go:172] (0xc0028fe8f0) (0xc00141bcc0) Create stream I0510 21:40:22.279931 6 log.go:172] (0xc0028fe8f0) (0xc00141bcc0) Stream added, broadcasting: 5 I0510 21:40:22.280787 6 log.go:172] (0xc0028fe8f0) Reply frame received for 5 I0510 21:40:22.363275 6 log.go:172] (0xc0028fe8f0) Data frame received for 5 I0510 21:40:22.363328 6 log.go:172] (0xc00141bcc0) (5) Data frame handling I0510 21:40:22.363359 6 log.go:172] (0xc0028fe8f0) Data frame received for 3 I0510 21:40:22.363380 6 log.go:172] (0xc0009ca000) (3) Data frame handling I0510 21:40:22.363397 6 log.go:172] (0xc0009ca000) (3) Data frame sent I0510 21:40:22.363410 6 log.go:172] (0xc0028fe8f0) Data frame received for 3 I0510 21:40:22.363422 6 log.go:172] (0xc0009ca000) (3) Data frame handling I0510 21:40:22.365286 6 log.go:172] (0xc0028fe8f0) Data frame received for 1 I0510 21:40:22.365306 6 log.go:172] (0xc001a12140) (1) Data frame handling I0510 21:40:22.365317 6 log.go:172] (0xc001a12140) (1) Data frame sent I0510 21:40:22.365331 6 log.go:172] (0xc0028fe8f0) (0xc001a12140) Stream removed, broadcasting: 1 I0510 21:40:22.365363 6 log.go:172] (0xc0028fe8f0) Go away received I0510 21:40:22.365444 6 log.go:172] (0xc0028fe8f0) (0xc001a12140) Stream removed, broadcasting: 1 I0510 21:40:22.365471 6 log.go:172] (0xc0028fe8f0) (0xc0009ca000) Stream removed, broadcasting: 3 I0510 21:40:22.365491 6 log.go:172] (0xc0028fe8f0) (0xc00141bcc0) 
Stream removed, broadcasting: 5 May 10 21:40:22.365: INFO: Exec stderr: "" May 10 21:40:22.365: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-409 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 10 21:40:22.365: INFO: >>> kubeConfig: /root/.kube/config I0510 21:40:22.390329 6 log.go:172] (0xc00181c160) (0xc000d80aa0) Create stream I0510 21:40:22.390356 6 log.go:172] (0xc00181c160) (0xc000d80aa0) Stream added, broadcasting: 1 I0510 21:40:22.391979 6 log.go:172] (0xc00181c160) Reply frame received for 1 I0510 21:40:22.392022 6 log.go:172] (0xc00181c160) (0xc001a121e0) Create stream I0510 21:40:22.392049 6 log.go:172] (0xc00181c160) (0xc001a121e0) Stream added, broadcasting: 3 I0510 21:40:22.393072 6 log.go:172] (0xc00181c160) Reply frame received for 3 I0510 21:40:22.393325 6 log.go:172] (0xc00181c160) (0xc000d80b40) Create stream I0510 21:40:22.393356 6 log.go:172] (0xc00181c160) (0xc000d80b40) Stream added, broadcasting: 5 I0510 21:40:22.394277 6 log.go:172] (0xc00181c160) Reply frame received for 5 I0510 21:40:22.457342 6 log.go:172] (0xc00181c160) Data frame received for 3 I0510 21:40:22.457377 6 log.go:172] (0xc001a121e0) (3) Data frame handling I0510 21:40:22.457392 6 log.go:172] (0xc001a121e0) (3) Data frame sent I0510 21:40:22.457402 6 log.go:172] (0xc00181c160) Data frame received for 3 I0510 21:40:22.457410 6 log.go:172] (0xc001a121e0) (3) Data frame handling I0510 21:40:22.457434 6 log.go:172] (0xc00181c160) Data frame received for 5 I0510 21:40:22.457443 6 log.go:172] (0xc000d80b40) (5) Data frame handling I0510 21:40:22.458922 6 log.go:172] (0xc00181c160) Data frame received for 1 I0510 21:40:22.458945 6 log.go:172] (0xc000d80aa0) (1) Data frame handling I0510 21:40:22.458957 6 log.go:172] (0xc000d80aa0) (1) Data frame sent I0510 21:40:22.458972 6 log.go:172] (0xc00181c160) (0xc000d80aa0) Stream removed, broadcasting: 1 I0510 21:40:22.458982 6 
log.go:172] (0xc00181c160) Go away received I0510 21:40:22.459099 6 log.go:172] (0xc00181c160) (0xc000d80aa0) Stream removed, broadcasting: 1 I0510 21:40:22.459112 6 log.go:172] (0xc00181c160) (0xc001a121e0) Stream removed, broadcasting: 3 I0510 21:40:22.459118 6 log.go:172] (0xc00181c160) (0xc000d80b40) Stream removed, broadcasting: 5 May 10 21:40:22.459: INFO: Exec stderr: "" May 10 21:40:22.459: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-409 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 10 21:40:22.459: INFO: >>> kubeConfig: /root/.kube/config I0510 21:40:22.491453 6 log.go:172] (0xc0028fef20) (0xc001a125a0) Create stream I0510 21:40:22.491489 6 log.go:172] (0xc0028fef20) (0xc001a125a0) Stream added, broadcasting: 1 I0510 21:40:22.493730 6 log.go:172] (0xc0028fef20) Reply frame received for 1 I0510 21:40:22.493789 6 log.go:172] (0xc0028fef20) (0xc0009ca0a0) Create stream I0510 21:40:22.493804 6 log.go:172] (0xc0028fef20) (0xc0009ca0a0) Stream added, broadcasting: 3 I0510 21:40:22.494795 6 log.go:172] (0xc0028fef20) Reply frame received for 3 I0510 21:40:22.494845 6 log.go:172] (0xc0028fef20) (0xc000d80d20) Create stream I0510 21:40:22.494861 6 log.go:172] (0xc0028fef20) (0xc000d80d20) Stream added, broadcasting: 5 I0510 21:40:22.495720 6 log.go:172] (0xc0028fef20) Reply frame received for 5 I0510 21:40:22.563437 6 log.go:172] (0xc0028fef20) Data frame received for 5 I0510 21:40:22.563481 6 log.go:172] (0xc0028fef20) Data frame received for 3 I0510 21:40:22.563512 6 log.go:172] (0xc0009ca0a0) (3) Data frame handling I0510 21:40:22.563527 6 log.go:172] (0xc0009ca0a0) (3) Data frame sent I0510 21:40:22.563538 6 log.go:172] (0xc0028fef20) Data frame received for 3 I0510 21:40:22.563549 6 log.go:172] (0xc0009ca0a0) (3) Data frame handling I0510 21:40:22.563588 6 log.go:172] (0xc000d80d20) (5) Data frame handling I0510 21:40:22.565316 6 log.go:172] (0xc0028fef20) 
Data frame received for 1 I0510 21:40:22.565341 6 log.go:172] (0xc001a125a0) (1) Data frame handling I0510 21:40:22.565358 6 log.go:172] (0xc001a125a0) (1) Data frame sent I0510 21:40:22.565376 6 log.go:172] (0xc0028fef20) (0xc001a125a0) Stream removed, broadcasting: 1 I0510 21:40:22.565390 6 log.go:172] (0xc0028fef20) Go away received I0510 21:40:22.565547 6 log.go:172] (0xc0028fef20) (0xc001a125a0) Stream removed, broadcasting: 1 I0510 21:40:22.565565 6 log.go:172] (0xc0028fef20) (0xc0009ca0a0) Stream removed, broadcasting: 3 I0510 21:40:22.565574 6 log.go:172] (0xc0028fef20) (0xc000d80d20) Stream removed, broadcasting: 5 May 10 21:40:22.565: INFO: Exec stderr: "" May 10 21:40:22.565: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-409 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 10 21:40:22.565: INFO: >>> kubeConfig: /root/.kube/config I0510 21:40:22.603075 6 log.go:172] (0xc0028ff550) (0xc001a12b40) Create stream I0510 21:40:22.603101 6 log.go:172] (0xc0028ff550) (0xc001a12b40) Stream added, broadcasting: 1 I0510 21:40:22.605044 6 log.go:172] (0xc0028ff550) Reply frame received for 1 I0510 21:40:22.605089 6 log.go:172] (0xc0028ff550) (0xc000d80fa0) Create stream I0510 21:40:22.605099 6 log.go:172] (0xc0028ff550) (0xc000d80fa0) Stream added, broadcasting: 3 I0510 21:40:22.606267 6 log.go:172] (0xc0028ff550) Reply frame received for 3 I0510 21:40:22.606314 6 log.go:172] (0xc0028ff550) (0xc0009ca280) Create stream I0510 21:40:22.606330 6 log.go:172] (0xc0028ff550) (0xc0009ca280) Stream added, broadcasting: 5 I0510 21:40:22.607289 6 log.go:172] (0xc0028ff550) Reply frame received for 5 I0510 21:40:22.664428 6 log.go:172] (0xc0028ff550) Data frame received for 5 I0510 21:40:22.664464 6 log.go:172] (0xc0009ca280) (5) Data frame handling I0510 21:40:22.664486 6 log.go:172] (0xc0028ff550) Data frame received for 3 I0510 21:40:22.664497 6 log.go:172] 
(0xc000d80fa0) (3) Data frame handling I0510 21:40:22.664508 6 log.go:172] (0xc000d80fa0) (3) Data frame sent I0510 21:40:22.664527 6 log.go:172] (0xc0028ff550) Data frame received for 3 I0510 21:40:22.664553 6 log.go:172] (0xc000d80fa0) (3) Data frame handling I0510 21:40:22.667033 6 log.go:172] (0xc0028ff550) Data frame received for 1 I0510 21:40:22.667068 6 log.go:172] (0xc001a12b40) (1) Data frame handling I0510 21:40:22.667081 6 log.go:172] (0xc001a12b40) (1) Data frame sent I0510 21:40:22.667099 6 log.go:172] (0xc0028ff550) (0xc001a12b40) Stream removed, broadcasting: 1 I0510 21:40:22.667117 6 log.go:172] (0xc0028ff550) Go away received I0510 21:40:22.667210 6 log.go:172] (0xc0028ff550) (0xc001a12b40) Stream removed, broadcasting: 1 I0510 21:40:22.667228 6 log.go:172] (0xc0028ff550) (0xc000d80fa0) Stream removed, broadcasting: 3 I0510 21:40:22.667235 6 log.go:172] (0xc0028ff550) (0xc0009ca280) Stream removed, broadcasting: 5 May 10 21:40:22.667: INFO: Exec stderr: "" STEP: Verifying /etc/hosts of container is not kubelet-managed since container specifies /etc/hosts mount May 10 21:40:22.667: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-409 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 10 21:40:22.667: INFO: >>> kubeConfig: /root/.kube/config I0510 21:40:22.700108 6 log.go:172] (0xc001e0c370) (0xc0009ca780) Create stream I0510 21:40:22.700138 6 log.go:172] (0xc001e0c370) (0xc0009ca780) Stream added, broadcasting: 1 I0510 21:40:22.703062 6 log.go:172] (0xc001e0c370) Reply frame received for 1 I0510 21:40:22.703110 6 log.go:172] (0xc001e0c370) (0xc000d81040) Create stream I0510 21:40:22.703127 6 log.go:172] (0xc001e0c370) (0xc000d81040) Stream added, broadcasting: 3 I0510 21:40:22.704160 6 log.go:172] (0xc001e0c370) Reply frame received for 3 I0510 21:40:22.704181 6 log.go:172] (0xc001e0c370) (0xc000d810e0) Create stream I0510 21:40:22.704187 6 log.go:172] 
(0xc001e0c370) (0xc000d810e0) Stream added, broadcasting: 5 I0510 21:40:22.705564 6 log.go:172] (0xc001e0c370) Reply frame received for 5 I0510 21:40:22.778840 6 log.go:172] (0xc001e0c370) Data frame received for 3 I0510 21:40:22.778867 6 log.go:172] (0xc000d81040) (3) Data frame handling I0510 21:40:22.778875 6 log.go:172] (0xc000d81040) (3) Data frame sent I0510 21:40:22.778880 6 log.go:172] (0xc001e0c370) Data frame received for 3 I0510 21:40:22.778884 6 log.go:172] (0xc000d81040) (3) Data frame handling I0510 21:40:22.778900 6 log.go:172] (0xc001e0c370) Data frame received for 5 I0510 21:40:22.778916 6 log.go:172] (0xc000d810e0) (5) Data frame handling I0510 21:40:22.780421 6 log.go:172] (0xc001e0c370) Data frame received for 1 I0510 21:40:22.780457 6 log.go:172] (0xc0009ca780) (1) Data frame handling I0510 21:40:22.780559 6 log.go:172] (0xc0009ca780) (1) Data frame sent I0510 21:40:22.780651 6 log.go:172] (0xc001e0c370) (0xc0009ca780) Stream removed, broadcasting: 1 I0510 21:40:22.780740 6 log.go:172] (0xc001e0c370) Go away received I0510 21:40:22.780941 6 log.go:172] (0xc001e0c370) (0xc0009ca780) Stream removed, broadcasting: 1 I0510 21:40:22.780976 6 log.go:172] (0xc001e0c370) (0xc000d81040) Stream removed, broadcasting: 3 I0510 21:40:22.781002 6 log.go:172] (0xc001e0c370) (0xc000d810e0) Stream removed, broadcasting: 5 May 10 21:40:22.781: INFO: Exec stderr: "" May 10 21:40:22.781: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-409 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 10 21:40:22.781: INFO: >>> kubeConfig: /root/.kube/config I0510 21:40:22.809427 6 log.go:172] (0xc001e0c9a0) (0xc0009ca960) Create stream I0510 21:40:22.809469 6 log.go:172] (0xc001e0c9a0) (0xc0009ca960) Stream added, broadcasting: 1 I0510 21:40:22.811721 6 log.go:172] (0xc001e0c9a0) Reply frame received for 1 I0510 21:40:22.811766 6 log.go:172] (0xc001e0c9a0) (0xc000ceb220) 
Create stream I0510 21:40:22.811780 6 log.go:172] (0xc001e0c9a0) (0xc000ceb220) Stream added, broadcasting: 3 I0510 21:40:22.812739 6 log.go:172] (0xc001e0c9a0) Reply frame received for 3 I0510 21:40:22.812789 6 log.go:172] (0xc001e0c9a0) (0xc000d81180) Create stream I0510 21:40:22.812810 6 log.go:172] (0xc001e0c9a0) (0xc000d81180) Stream added, broadcasting: 5 I0510 21:40:22.813838 6 log.go:172] (0xc001e0c9a0) Reply frame received for 5 I0510 21:40:22.873875 6 log.go:172] (0xc001e0c9a0) Data frame received for 5 I0510 21:40:22.873909 6 log.go:172] (0xc000d81180) (5) Data frame handling I0510 21:40:22.873971 6 log.go:172] (0xc001e0c9a0) Data frame received for 3 I0510 21:40:22.874023 6 log.go:172] (0xc000ceb220) (3) Data frame handling I0510 21:40:22.874054 6 log.go:172] (0xc000ceb220) (3) Data frame sent I0510 21:40:22.874073 6 log.go:172] (0xc001e0c9a0) Data frame received for 3 I0510 21:40:22.874085 6 log.go:172] (0xc000ceb220) (3) Data frame handling I0510 21:40:22.875597 6 log.go:172] (0xc001e0c9a0) Data frame received for 1 I0510 21:40:22.875613 6 log.go:172] (0xc0009ca960) (1) Data frame handling I0510 21:40:22.875631 6 log.go:172] (0xc0009ca960) (1) Data frame sent I0510 21:40:22.875645 6 log.go:172] (0xc001e0c9a0) (0xc0009ca960) Stream removed, broadcasting: 1 I0510 21:40:22.875744 6 log.go:172] (0xc001e0c9a0) Go away received I0510 21:40:22.875780 6 log.go:172] (0xc001e0c9a0) (0xc0009ca960) Stream removed, broadcasting: 1 I0510 21:40:22.875812 6 log.go:172] (0xc001e0c9a0) (0xc000ceb220) Stream removed, broadcasting: 3 I0510 21:40:22.875828 6 log.go:172] (0xc001e0c9a0) (0xc000d81180) Stream removed, broadcasting: 5 May 10 21:40:22.875: INFO: Exec stderr: "" STEP: Verifying /etc/hosts content of container is not kubelet-managed for pod with hostNetwork=true May 10 21:40:22.875: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-409 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true 
PreserveWhitespace:false} May 10 21:40:22.875: INFO: >>> kubeConfig: /root/.kube/config I0510 21:40:22.904059 6 log.go:172] (0xc001e0cf20) (0xc0009caf00) Create stream I0510 21:40:22.904096 6 log.go:172] (0xc001e0cf20) (0xc0009caf00) Stream added, broadcasting: 1 I0510 21:40:22.906242 6 log.go:172] (0xc001e0cf20) Reply frame received for 1 I0510 21:40:22.906282 6 log.go:172] (0xc001e0cf20) (0xc001a12d20) Create stream I0510 21:40:22.906297 6 log.go:172] (0xc001e0cf20) (0xc001a12d20) Stream added, broadcasting: 3 I0510 21:40:22.907296 6 log.go:172] (0xc001e0cf20) Reply frame received for 3 I0510 21:40:22.907328 6 log.go:172] (0xc001e0cf20) (0xc000d81400) Create stream I0510 21:40:22.907336 6 log.go:172] (0xc001e0cf20) (0xc000d81400) Stream added, broadcasting: 5 I0510 21:40:22.908283 6 log.go:172] (0xc001e0cf20) Reply frame received for 5 I0510 21:40:22.984624 6 log.go:172] (0xc001e0cf20) Data frame received for 5 I0510 21:40:22.984663 6 log.go:172] (0xc000d81400) (5) Data frame handling I0510 21:40:22.984698 6 log.go:172] (0xc001e0cf20) Data frame received for 3 I0510 21:40:22.984737 6 log.go:172] (0xc001a12d20) (3) Data frame handling I0510 21:40:22.984770 6 log.go:172] (0xc001a12d20) (3) Data frame sent I0510 21:40:22.984787 6 log.go:172] (0xc001e0cf20) Data frame received for 3 I0510 21:40:22.984799 6 log.go:172] (0xc001a12d20) (3) Data frame handling I0510 21:40:22.986413 6 log.go:172] (0xc001e0cf20) Data frame received for 1 I0510 21:40:22.986447 6 log.go:172] (0xc0009caf00) (1) Data frame handling I0510 21:40:22.986518 6 log.go:172] (0xc0009caf00) (1) Data frame sent I0510 21:40:22.986541 6 log.go:172] (0xc001e0cf20) (0xc0009caf00) Stream removed, broadcasting: 1 I0510 21:40:22.986560 6 log.go:172] (0xc001e0cf20) Go away received I0510 21:40:22.986703 6 log.go:172] (0xc001e0cf20) (0xc0009caf00) Stream removed, broadcasting: 1 I0510 21:40:22.986728 6 log.go:172] (0xc001e0cf20) (0xc001a12d20) Stream removed, broadcasting: 3 I0510 21:40:22.986739 6 log.go:172] 
(0xc001e0cf20) (0xc000d81400) Stream removed, broadcasting: 5 May 10 21:40:22.986: INFO: Exec stderr: "" May 10 21:40:22.986: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-409 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 10 21:40:22.986: INFO: >>> kubeConfig: /root/.kube/config I0510 21:40:23.046063 6 log.go:172] (0xc002c88a50) (0xc000cebd60) Create stream I0510 21:40:23.046102 6 log.go:172] (0xc002c88a50) (0xc000cebd60) Stream added, broadcasting: 1 I0510 21:40:23.047958 6 log.go:172] (0xc002c88a50) Reply frame received for 1 I0510 21:40:23.048002 6 log.go:172] (0xc002c88a50) (0xc000cebe00) Create stream I0510 21:40:23.048021 6 log.go:172] (0xc002c88a50) (0xc000cebe00) Stream added, broadcasting: 3 I0510 21:40:23.049589 6 log.go:172] (0xc002c88a50) Reply frame received for 3 I0510 21:40:23.049630 6 log.go:172] (0xc002c88a50) (0xc000cebf40) Create stream I0510 21:40:23.049642 6 log.go:172] (0xc002c88a50) (0xc000cebf40) Stream added, broadcasting: 5 I0510 21:40:23.051031 6 log.go:172] (0xc002c88a50) Reply frame received for 5 I0510 21:40:23.111360 6 log.go:172] (0xc002c88a50) Data frame received for 3 I0510 21:40:23.111396 6 log.go:172] (0xc000cebe00) (3) Data frame handling I0510 21:40:23.111411 6 log.go:172] (0xc000cebe00) (3) Data frame sent I0510 21:40:23.111426 6 log.go:172] (0xc002c88a50) Data frame received for 3 I0510 21:40:23.111436 6 log.go:172] (0xc000cebe00) (3) Data frame handling I0510 21:40:23.111496 6 log.go:172] (0xc002c88a50) Data frame received for 5 I0510 21:40:23.111537 6 log.go:172] (0xc000cebf40) (5) Data frame handling I0510 21:40:23.112909 6 log.go:172] (0xc002c88a50) Data frame received for 1 I0510 21:40:23.112925 6 log.go:172] (0xc000cebd60) (1) Data frame handling I0510 21:40:23.112932 6 log.go:172] (0xc000cebd60) (1) Data frame sent I0510 21:40:23.112941 6 log.go:172] (0xc002c88a50) (0xc000cebd60) Stream removed, 
broadcasting: 1 I0510 21:40:23.112991 6 log.go:172] (0xc002c88a50) Go away received I0510 21:40:23.113021 6 log.go:172] (0xc002c88a50) (0xc000cebd60) Stream removed, broadcasting: 1 I0510 21:40:23.113084 6 log.go:172] Streams opened: 2, map[spdy.StreamId]*spdystream.Stream{0x3:(*spdystream.Stream)(0xc000cebe00), 0x5:(*spdystream.Stream)(0xc000cebf40)} I0510 21:40:23.113305 6 log.go:172] (0xc002c88a50) (0xc000cebe00) Stream removed, broadcasting: 3 I0510 21:40:23.113322 6 log.go:172] (0xc002c88a50) (0xc000cebf40) Stream removed, broadcasting: 5 May 10 21:40:23.113: INFO: Exec stderr: "" May 10 21:40:23.113: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-409 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 10 21:40:23.113: INFO: >>> kubeConfig: /root/.kube/config I0510 21:40:23.182695 6 log.go:172] (0xc0019948f0) (0xc00141bf40) Create stream I0510 21:40:23.182722 6 log.go:172] (0xc0019948f0) (0xc00141bf40) Stream added, broadcasting: 1 I0510 21:40:23.185446 6 log.go:172] (0xc0019948f0) Reply frame received for 1 I0510 21:40:23.185499 6 log.go:172] (0xc0019948f0) (0xc000ff4500) Create stream I0510 21:40:23.185515 6 log.go:172] (0xc0019948f0) (0xc000ff4500) Stream added, broadcasting: 3 I0510 21:40:23.186601 6 log.go:172] (0xc0019948f0) Reply frame received for 3 I0510 21:40:23.186639 6 log.go:172] (0xc0019948f0) (0xc000ff45a0) Create stream I0510 21:40:23.186654 6 log.go:172] (0xc0019948f0) (0xc000ff45a0) Stream added, broadcasting: 5 I0510 21:40:23.187803 6 log.go:172] (0xc0019948f0) Reply frame received for 5 I0510 21:40:23.260597 6 log.go:172] (0xc0019948f0) Data frame received for 5 I0510 21:40:23.260623 6 log.go:172] (0xc000ff45a0) (5) Data frame handling I0510 21:40:23.260665 6 log.go:172] (0xc0019948f0) Data frame received for 3 I0510 21:40:23.260701 6 log.go:172] (0xc000ff4500) (3) Data frame handling I0510 21:40:23.260713 6 log.go:172] 
(0xc000ff4500) (3) Data frame sent I0510 21:40:23.260725 6 log.go:172] (0xc0019948f0) Data frame received for 3 I0510 21:40:23.260731 6 log.go:172] (0xc000ff4500) (3) Data frame handling I0510 21:40:23.261969 6 log.go:172] (0xc0019948f0) Data frame received for 1 I0510 21:40:23.261988 6 log.go:172] (0xc00141bf40) (1) Data frame handling I0510 21:40:23.262004 6 log.go:172] (0xc00141bf40) (1) Data frame sent I0510 21:40:23.262017 6 log.go:172] (0xc0019948f0) (0xc00141bf40) Stream removed, broadcasting: 1 I0510 21:40:23.262040 6 log.go:172] (0xc0019948f0) Go away received I0510 21:40:23.262132 6 log.go:172] (0xc0019948f0) (0xc00141bf40) Stream removed, broadcasting: 1 I0510 21:40:23.262179 6 log.go:172] (0xc0019948f0) (0xc000ff4500) Stream removed, broadcasting: 3 I0510 21:40:23.262194 6 log.go:172] (0xc0019948f0) (0xc000ff45a0) Stream removed, broadcasting: 5 May 10 21:40:23.262: INFO: Exec stderr: "" May 10 21:40:23.262: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-409 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 10 21:40:23.262: INFO: >>> kubeConfig: /root/.kube/config I0510 21:40:23.287062 6 log.go:172] (0xc0028ffb80) (0xc001a12f00) Create stream I0510 21:40:23.287091 6 log.go:172] (0xc0028ffb80) (0xc001a12f00) Stream added, broadcasting: 1 I0510 21:40:23.288932 6 log.go:172] (0xc0028ffb80) Reply frame received for 1 I0510 21:40:23.288971 6 log.go:172] (0xc0028ffb80) (0xc000d81680) Create stream I0510 21:40:23.288982 6 log.go:172] (0xc0028ffb80) (0xc000d81680) Stream added, broadcasting: 3 I0510 21:40:23.290171 6 log.go:172] (0xc0028ffb80) Reply frame received for 3 I0510 21:40:23.290217 6 log.go:172] (0xc0028ffb80) (0xc00118a320) Create stream I0510 21:40:23.290232 6 log.go:172] (0xc0028ffb80) (0xc00118a320) Stream added, broadcasting: 5 I0510 21:40:23.291141 6 log.go:172] (0xc0028ffb80) Reply frame received for 5 I0510 21:40:23.341844 6 
log.go:172] (0xc0028ffb80) Data frame received for 5 I0510 21:40:23.341881 6 log.go:172] (0xc00118a320) (5) Data frame handling I0510 21:40:23.341906 6 log.go:172] (0xc0028ffb80) Data frame received for 3 I0510 21:40:23.341921 6 log.go:172] (0xc000d81680) (3) Data frame handling I0510 21:40:23.341934 6 log.go:172] (0xc000d81680) (3) Data frame sent I0510 21:40:23.341946 6 log.go:172] (0xc0028ffb80) Data frame received for 3 I0510 21:40:23.341957 6 log.go:172] (0xc000d81680) (3) Data frame handling I0510 21:40:23.343726 6 log.go:172] (0xc0028ffb80) Data frame received for 1 I0510 21:40:23.343750 6 log.go:172] (0xc001a12f00) (1) Data frame handling I0510 21:40:23.343766 6 log.go:172] (0xc001a12f00) (1) Data frame sent I0510 21:40:23.343785 6 log.go:172] (0xc0028ffb80) (0xc001a12f00) Stream removed, broadcasting: 1 I0510 21:40:23.343848 6 log.go:172] (0xc0028ffb80) Go away received I0510 21:40:23.343895 6 log.go:172] (0xc0028ffb80) (0xc001a12f00) Stream removed, broadcasting: 1 I0510 21:40:23.343936 6 log.go:172] (0xc0028ffb80) (0xc000d81680) Stream removed, broadcasting: 3 I0510 21:40:23.343954 6 log.go:172] (0xc0028ffb80) (0xc00118a320) Stream removed, broadcasting: 5 May 10 21:40:23.343: INFO: Exec stderr: "" [AfterEach] [k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 10 21:40:23.344: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-kubelet-etc-hosts-409" for this suite. 
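[Editor's note] The exec checks above verify kubelet's /etc/hosts management rules: the kubelet injects a managed hosts file (which begins with the header line "# Kubernetes-managed hosts file.") into pod-networked containers, but leaves /etc/hosts alone when the container mounts its own volume at that path (as busybox-3 does) or when the pod runs with hostNetwork=true (as test-host-network-pod does). A minimal pod spec reproducing both cases might look like the sketch below; all names here are illustrative assumptions, not taken from the e2e test source.

```yaml
# Illustrative sketch only -- pod, container, and volume names are assumptions.
apiVersion: v1
kind: Pod
metadata:
  name: etc-hosts-demo
spec:
  volumes:
    - name: hosts-volume
      emptyDir: {}
  containers:
    - name: managed           # no /etc/hosts mount: kubelet injects its managed hosts file
      image: busybox
      command: ["sleep", "3600"]
    - name: unmanaged         # mounts over /etc/hosts: kubelet does not touch it
      image: busybox
      command: ["sleep", "3600"]
      volumeMounts:
        - name: hosts-volume
          mountPath: /etc/hosts
```

Running `cat /etc/hosts` in each container (as the test does via ExecWithOptions) would then show the kubelet header only in the `managed` container.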
• [SLOW TEST:15.298 seconds] [k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":122,"skipped":2079,"failed":0} SSS ------------------------------ [sig-network] Proxy version v1 should proxy logs on node with explicit kubelet port using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 10 21:40:23.353: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy logs on node with explicit kubelet port using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 10 21:40:23.466: INFO: (0) /api/v1/nodes/jerma-worker2:10250/proxy/logs/:
containers/
pods/
(200; 22.766368ms) May 10 21:40:23.470: INFO: (1) /api/v1/nodes/jerma-worker2:10250/proxy/logs/:
containers/
pods/
(200; 3.408246ms) May 10 21:40:23.473: INFO: (2) /api/v1/nodes/jerma-worker2:10250/proxy/logs/:
containers/
pods/
(200; 3.614448ms) May 10 21:40:23.477: INFO: (3) /api/v1/nodes/jerma-worker2:10250/proxy/logs/:
containers/
pods/
(200; 3.674049ms) May 10 21:40:23.480: INFO: (4) /api/v1/nodes/jerma-worker2:10250/proxy/logs/:
containers/
pods/
(200; 3.106642ms) May 10 21:40:23.484: INFO: (5) /api/v1/nodes/jerma-worker2:10250/proxy/logs/:
containers/
pods/
(200; 3.403004ms) May 10 21:40:23.487: INFO: (6) /api/v1/nodes/jerma-worker2:10250/proxy/logs/:
containers/
pods/
(200; 3.591948ms) May 10 21:40:23.491: INFO: (7) /api/v1/nodes/jerma-worker2:10250/proxy/logs/:
containers/
pods/
(200; 3.395741ms) May 10 21:40:23.494: INFO: (8) /api/v1/nodes/jerma-worker2:10250/proxy/logs/:
containers/
pods/
(200; 3.664653ms) May 10 21:40:23.515: INFO: (9) /api/v1/nodes/jerma-worker2:10250/proxy/logs/:
containers/
pods/
(200; 20.537894ms) May 10 21:40:23.519: INFO: (10) /api/v1/nodes/jerma-worker2:10250/proxy/logs/:
containers/
pods/
(200; 4.093728ms) May 10 21:40:23.524: INFO: (11) /api/v1/nodes/jerma-worker2:10250/proxy/logs/:
containers/
pods/
(200; 5.324452ms) May 10 21:40:23.529: INFO: (12) /api/v1/nodes/jerma-worker2:10250/proxy/logs/:
containers/
pods/
(200; 4.02385ms) May 10 21:40:23.532: INFO: (13) /api/v1/nodes/jerma-worker2:10250/proxy/logs/:
containers/
pods/
(200; 3.071496ms) May 10 21:40:23.534: INFO: (14) /api/v1/nodes/jerma-worker2:10250/proxy/logs/:
containers/
pods/
(200; 2.644284ms) May 10 21:40:23.537: INFO: (15) /api/v1/nodes/jerma-worker2:10250/proxy/logs/:
containers/
pods/
(200; 2.635011ms) May 10 21:40:23.540: INFO: (16) /api/v1/nodes/jerma-worker2:10250/proxy/logs/:
containers/
pods/
(200; 2.543643ms) May 10 21:40:23.542: INFO: (17) /api/v1/nodes/jerma-worker2:10250/proxy/logs/:
containers/
pods/
(200; 2.799227ms) May 10 21:40:23.545: INFO: (18) /api/v1/nodes/jerma-worker2:10250/proxy/logs/:
containers/
pods/
(200; 2.816293ms) May 10 21:40:23.549: INFO: (19) /api/v1/nodes/jerma-worker2:10250/proxy/logs/:
containers/
pods/
(200; 3.406583ms) [AfterEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 10 21:40:23.549: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "proxy-3623" for this suite. •{"msg":"PASSED [sig-network] Proxy version v1 should proxy logs on node with explicit kubelet port using proxy subresource [Conformance]","total":278,"completed":123,"skipped":2082,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 10 21:40:23.561: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow substituting values in a container's command [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test substitution in container's command May 10 21:40:23.671: INFO: Waiting up to 5m0s for pod "var-expansion-5683ee23-b57c-43d0-a6ac-a759ecdfef01" in namespace "var-expansion-7650" to be "success or failure" May 10 21:40:23.676: INFO: Pod "var-expansion-5683ee23-b57c-43d0-a6ac-a759ecdfef01": Phase="Pending", Reason="", readiness=false. Elapsed: 4.090758ms May 10 21:40:25.682: INFO: Pod "var-expansion-5683ee23-b57c-43d0-a6ac-a759ecdfef01": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.010927157s May 10 21:40:27.687: INFO: Pod "var-expansion-5683ee23-b57c-43d0-a6ac-a759ecdfef01": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.015326055s STEP: Saw pod success May 10 21:40:27.687: INFO: Pod "var-expansion-5683ee23-b57c-43d0-a6ac-a759ecdfef01" satisfied condition "success or failure" May 10 21:40:27.690: INFO: Trying to get logs from node jerma-worker pod var-expansion-5683ee23-b57c-43d0-a6ac-a759ecdfef01 container dapi-container: STEP: delete the pod May 10 21:40:27.773: INFO: Waiting for pod var-expansion-5683ee23-b57c-43d0-a6ac-a759ecdfef01 to disappear May 10 21:40:27.783: INFO: Pod var-expansion-5683ee23-b57c-43d0-a6ac-a759ecdfef01 no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 10 21:40:27.783: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-7650" for this suite. •{"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance]","total":278,"completed":124,"skipped":2139,"failed":0} SSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition listing custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 10 21:40:27.790: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] listing custom resource definition objects works [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 10 21:40:28.024: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 10 21:40:34.252: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-3529" for this suite. • [SLOW TEST:6.471 seconds] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 Simple CustomResourceDefinition /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/custom_resource_definition.go:47 listing custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition listing custom resource definition objects works [Conformance]","total":278,"completed":125,"skipped":2143,"failed":0} SS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 10 21:40:34.262: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] 
StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:64 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:79 STEP: Creating service test in namespace statefulset-5251 [It] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating stateful set ss in namespace statefulset-5251 STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-5251 May 10 21:40:34.370: INFO: Found 0 stateful pods, waiting for 1 May 10 21:40:44.375: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: Confirming that stateful set scale up will not halt with unhealthy stateful pod May 10 21:40:44.379: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5251 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' May 10 21:40:47.440: INFO: stderr: "I0510 21:40:47.277917 969 log.go:172] (0xc0005e3130) (0xc0005d9f40) Create stream\nI0510 21:40:47.277953 969 log.go:172] (0xc0005e3130) (0xc0005d9f40) Stream added, broadcasting: 1\nI0510 21:40:47.280474 969 log.go:172] (0xc0005e3130) Reply frame received for 1\nI0510 21:40:47.280528 969 log.go:172] (0xc0005e3130) (0xc00027c780) Create stream\nI0510 21:40:47.280550 969 log.go:172] (0xc0005e3130) (0xc00027c780) Stream added, broadcasting: 3\nI0510 21:40:47.281663 969 log.go:172] (0xc0005e3130) Reply frame received for 3\nI0510 21:40:47.281686 969 log.go:172] (0xc0005e3130) (0xc0003be3c0) Create stream\nI0510 21:40:47.281701 969 log.go:172] (0xc0005e3130) (0xc0003be3c0) Stream added, broadcasting: 5\nI0510 21:40:47.282655 969 log.go:172] (0xc0005e3130) Reply frame received for 5\nI0510 21:40:47.402520 969 
log.go:172] (0xc0005e3130) Data frame received for 5\nI0510 21:40:47.402539 969 log.go:172] (0xc0003be3c0) (5) Data frame handling\nI0510 21:40:47.402549 969 log.go:172] (0xc0003be3c0) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0510 21:40:47.433309 969 log.go:172] (0xc0005e3130) Data frame received for 3\nI0510 21:40:47.433333 969 log.go:172] (0xc00027c780) (3) Data frame handling\nI0510 21:40:47.433346 969 log.go:172] (0xc00027c780) (3) Data frame sent\nI0510 21:40:47.433379 969 log.go:172] (0xc0005e3130) Data frame received for 3\nI0510 21:40:47.433392 969 log.go:172] (0xc00027c780) (3) Data frame handling\nI0510 21:40:47.433634 969 log.go:172] (0xc0005e3130) Data frame received for 5\nI0510 21:40:47.433665 969 log.go:172] (0xc0003be3c0) (5) Data frame handling\nI0510 21:40:47.435204 969 log.go:172] (0xc0005e3130) Data frame received for 1\nI0510 21:40:47.435218 969 log.go:172] (0xc0005d9f40) (1) Data frame handling\nI0510 21:40:47.435234 969 log.go:172] (0xc0005d9f40) (1) Data frame sent\nI0510 21:40:47.435323 969 log.go:172] (0xc0005e3130) (0xc0005d9f40) Stream removed, broadcasting: 1\nI0510 21:40:47.435365 969 log.go:172] (0xc0005e3130) Go away received\nI0510 21:40:47.435739 969 log.go:172] (0xc0005e3130) (0xc0005d9f40) Stream removed, broadcasting: 1\nI0510 21:40:47.435761 969 log.go:172] (0xc0005e3130) (0xc00027c780) Stream removed, broadcasting: 3\nI0510 21:40:47.435773 969 log.go:172] (0xc0005e3130) (0xc0003be3c0) Stream removed, broadcasting: 5\n" May 10 21:40:47.440: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" May 10 21:40:47.440: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' May 10 21:40:47.444: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true May 10 21:40:57.472: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false 
May 10 21:40:57.472: INFO: Waiting for statefulset status.replicas updated to 0 May 10 21:40:57.490: INFO: POD NODE PHASE GRACE CONDITIONS May 10 21:40:57.491: INFO: ss-0 jerma-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-10 21:40:34 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-10 21:40:48 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-10 21:40:48 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-10 21:40:34 +0000 UTC }] May 10 21:40:57.491: INFO: May 10 21:40:57.491: INFO: StatefulSet ss has not reached scale 3, at 1 May 10 21:40:58.496: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.989001425s May 10 21:40:59.500: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.984112964s May 10 21:41:00.515: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.979955006s May 10 21:41:01.580: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.964783923s May 10 21:41:02.653: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.899269692s May 10 21:41:03.657: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.826885715s May 10 21:41:04.661: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.822400952s May 10 21:41:05.667: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.818097555s May 10 21:41:06.671: INFO: Verifying statefulset ss doesn't scale past 3 for another 812.949129ms STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-5251 May 10 21:41:07.676: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5251 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 10 21:41:07.881: INFO: stderr: "I0510 
21:41:07.812638 999 log.go:172] (0xc000104bb0) (0xc0006d5c20) Create stream\nI0510 21:41:07.812694 999 log.go:172] (0xc000104bb0) (0xc0006d5c20) Stream added, broadcasting: 1\nI0510 21:41:07.814584 999 log.go:172] (0xc000104bb0) Reply frame received for 1\nI0510 21:41:07.814624 999 log.go:172] (0xc000104bb0) (0xc000986000) Create stream\nI0510 21:41:07.814647 999 log.go:172] (0xc000104bb0) (0xc000986000) Stream added, broadcasting: 3\nI0510 21:41:07.815304 999 log.go:172] (0xc000104bb0) Reply frame received for 3\nI0510 21:41:07.815330 999 log.go:172] (0xc000104bb0) (0xc00021d4a0) Create stream\nI0510 21:41:07.815384 999 log.go:172] (0xc000104bb0) (0xc00021d4a0) Stream added, broadcasting: 5\nI0510 21:41:07.816246 999 log.go:172] (0xc000104bb0) Reply frame received for 5\nI0510 21:41:07.875614 999 log.go:172] (0xc000104bb0) Data frame received for 5\nI0510 21:41:07.875636 999 log.go:172] (0xc00021d4a0) (5) Data frame handling\nI0510 21:41:07.875643 999 log.go:172] (0xc00021d4a0) (5) Data frame sent\nI0510 21:41:07.875649 999 log.go:172] (0xc000104bb0) Data frame received for 5\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0510 21:41:07.875653 999 log.go:172] (0xc00021d4a0) (5) Data frame handling\nI0510 21:41:07.875684 999 log.go:172] (0xc000104bb0) Data frame received for 3\nI0510 21:41:07.875698 999 log.go:172] (0xc000986000) (3) Data frame handling\nI0510 21:41:07.875708 999 log.go:172] (0xc000986000) (3) Data frame sent\nI0510 21:41:07.875719 999 log.go:172] (0xc000104bb0) Data frame received for 3\nI0510 21:41:07.875730 999 log.go:172] (0xc000986000) (3) Data frame handling\nI0510 21:41:07.876752 999 log.go:172] (0xc000104bb0) Data frame received for 1\nI0510 21:41:07.876772 999 log.go:172] (0xc0006d5c20) (1) Data frame handling\nI0510 21:41:07.876791 999 log.go:172] (0xc0006d5c20) (1) Data frame sent\nI0510 21:41:07.876804 999 log.go:172] (0xc000104bb0) (0xc0006d5c20) Stream removed, broadcasting: 1\nI0510 21:41:07.876818 999 log.go:172] 
(0xc000104bb0) Go away received\nI0510 21:41:07.877278 999 log.go:172] (0xc000104bb0) (0xc0006d5c20) Stream removed, broadcasting: 1\nI0510 21:41:07.877293 999 log.go:172] (0xc000104bb0) (0xc000986000) Stream removed, broadcasting: 3\nI0510 21:41:07.877302 999 log.go:172] (0xc000104bb0) (0xc00021d4a0) Stream removed, broadcasting: 5\n" May 10 21:41:07.882: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" May 10 21:41:07.882: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' May 10 21:41:07.882: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5251 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 10 21:41:08.066: INFO: stderr: "I0510 21:41:07.990833 1019 log.go:172] (0xc000ab4210) (0xc000aaabe0) Create stream\nI0510 21:41:07.990882 1019 log.go:172] (0xc000ab4210) (0xc000aaabe0) Stream added, broadcasting: 1\nI0510 21:41:07.994924 1019 log.go:172] (0xc000ab4210) Reply frame received for 1\nI0510 21:41:07.994987 1019 log.go:172] (0xc000ab4210) (0xc0005b8640) Create stream\nI0510 21:41:07.995016 1019 log.go:172] (0xc000ab4210) (0xc0005b8640) Stream added, broadcasting: 3\nI0510 21:41:07.995892 1019 log.go:172] (0xc000ab4210) Reply frame received for 3\nI0510 21:41:07.995912 1019 log.go:172] (0xc000ab4210) (0xc000319400) Create stream\nI0510 21:41:07.995920 1019 log.go:172] (0xc000ab4210) (0xc000319400) Stream added, broadcasting: 5\nI0510 21:41:07.996810 1019 log.go:172] (0xc000ab4210) Reply frame received for 5\nI0510 21:41:08.058888 1019 log.go:172] (0xc000ab4210) Data frame received for 3\nI0510 21:41:08.058931 1019 log.go:172] (0xc0005b8640) (3) Data frame handling\nI0510 21:41:08.058956 1019 log.go:172] (0xc0005b8640) (3) Data frame sent\nI0510 21:41:08.058969 1019 log.go:172] (0xc000ab4210) Data frame received for 3\nI0510 21:41:08.058995 1019 log.go:172] 
(0xc000ab4210) Data frame received for 5\nI0510 21:41:08.059026 1019 log.go:172] (0xc000319400) (5) Data frame handling\nI0510 21:41:08.059048 1019 log.go:172] (0xc000319400) (5) Data frame sent\nI0510 21:41:08.059076 1019 log.go:172] (0xc000ab4210) Data frame received for 5\nI0510 21:41:08.059087 1019 log.go:172] (0xc000319400) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\nI0510 21:41:08.059115 1019 log.go:172] (0xc0005b8640) (3) Data frame handling\nI0510 21:41:08.060644 1019 log.go:172] (0xc000ab4210) Data frame received for 1\nI0510 21:41:08.060676 1019 log.go:172] (0xc000aaabe0) (1) Data frame handling\nI0510 21:41:08.060699 1019 log.go:172] (0xc000aaabe0) (1) Data frame sent\nI0510 21:41:08.060727 1019 log.go:172] (0xc000ab4210) (0xc000aaabe0) Stream removed, broadcasting: 1\nI0510 21:41:08.060760 1019 log.go:172] (0xc000ab4210) Go away received\nI0510 21:41:08.061471 1019 log.go:172] (0xc000ab4210) (0xc000aaabe0) Stream removed, broadcasting: 1\nI0510 21:41:08.061495 1019 log.go:172] (0xc000ab4210) (0xc0005b8640) Stream removed, broadcasting: 3\nI0510 21:41:08.061507 1019 log.go:172] (0xc000ab4210) (0xc000319400) Stream removed, broadcasting: 5\n" May 10 21:41:08.066: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" May 10 21:41:08.066: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' May 10 21:41:08.066: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5251 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 10 21:41:08.289: INFO: stderr: "I0510 21:41:08.208183 1040 log.go:172] (0xc000574370) (0xc00091ab40) Create stream\nI0510 21:41:08.208232 1040 log.go:172] (0xc000574370) (0xc00091ab40) Stream added, broadcasting: 1\nI0510 21:41:08.211395 1040 
log.go:172] (0xc000574370) Reply frame received for 1\nI0510 21:41:08.211432 1040 log.go:172] (0xc000574370) (0xc00069a640) Create stream\nI0510 21:41:08.211442 1040 log.go:172] (0xc000574370) (0xc00069a640) Stream added, broadcasting: 3\nI0510 21:41:08.212248 1040 log.go:172] (0xc000574370) Reply frame received for 3\nI0510 21:41:08.212272 1040 log.go:172] (0xc000574370) (0xc000425400) Create stream\nI0510 21:41:08.212279 1040 log.go:172] (0xc000574370) (0xc000425400) Stream added, broadcasting: 5\nI0510 21:41:08.213339 1040 log.go:172] (0xc000574370) Reply frame received for 5\nI0510 21:41:08.282445 1040 log.go:172] (0xc000574370) Data frame received for 5\nI0510 21:41:08.282491 1040 log.go:172] (0xc000425400) (5) Data frame handling\nI0510 21:41:08.282507 1040 log.go:172] (0xc000425400) (5) Data frame sent\nI0510 21:41:08.282518 1040 log.go:172] (0xc000574370) Data frame received for 5\nI0510 21:41:08.282529 1040 log.go:172] (0xc000425400) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\nI0510 21:41:08.282570 1040 log.go:172] (0xc000574370) Data frame received for 3\nI0510 21:41:08.282589 1040 log.go:172] (0xc00069a640) (3) Data frame handling\nI0510 21:41:08.282605 1040 log.go:172] (0xc00069a640) (3) Data frame sent\nI0510 21:41:08.282624 1040 log.go:172] (0xc000574370) Data frame received for 3\nI0510 21:41:08.282638 1040 log.go:172] (0xc00069a640) (3) Data frame handling\nI0510 21:41:08.284547 1040 log.go:172] (0xc000574370) Data frame received for 1\nI0510 21:41:08.284569 1040 log.go:172] (0xc00091ab40) (1) Data frame handling\nI0510 21:41:08.284605 1040 log.go:172] (0xc00091ab40) (1) Data frame sent\nI0510 21:41:08.284646 1040 log.go:172] (0xc000574370) (0xc00091ab40) Stream removed, broadcasting: 1\nI0510 21:41:08.284762 1040 log.go:172] (0xc000574370) Go away received\nI0510 21:41:08.285307 1040 log.go:172] (0xc000574370) (0xc00091ab40) Stream removed, 
broadcasting: 1\nI0510 21:41:08.285332 1040 log.go:172] (0xc000574370) (0xc00069a640) Stream removed, broadcasting: 3\nI0510 21:41:08.285343 1040 log.go:172] (0xc000574370) (0xc000425400) Stream removed, broadcasting: 5\n" May 10 21:41:08.289: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" May 10 21:41:08.289: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-2: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' May 10 21:41:08.299: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true May 10 21:41:08.299: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true May 10 21:41:08.299: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Scale down will not halt with unhealthy stateful pod May 10 21:41:08.302: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5251 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' May 10 21:41:08.513: INFO: stderr: "I0510 21:41:08.423618 1061 log.go:172] (0xc000ade2c0) (0xc00055c6e0) Create stream\nI0510 21:41:08.423667 1061 log.go:172] (0xc000ade2c0) (0xc00055c6e0) Stream added, broadcasting: 1\nI0510 21:41:08.425839 1061 log.go:172] (0xc000ade2c0) Reply frame received for 1\nI0510 21:41:08.425876 1061 log.go:172] (0xc000ade2c0) (0xc0005ae1e0) Create stream\nI0510 21:41:08.425888 1061 log.go:172] (0xc000ade2c0) (0xc0005ae1e0) Stream added, broadcasting: 3\nI0510 21:41:08.426908 1061 log.go:172] (0xc000ade2c0) Reply frame received for 3\nI0510 21:41:08.426965 1061 log.go:172] (0xc000ade2c0) (0xc0005ae280) Create stream\nI0510 21:41:08.426983 1061 log.go:172] (0xc000ade2c0) (0xc0005ae280) Stream added, broadcasting: 5\nI0510 21:41:08.427910 1061 log.go:172] (0xc000ade2c0) Reply frame received for 5\nI0510 21:41:08.506329 1061 log.go:172] (0xc000ade2c0) Data frame received for 5\nI0510 
21:41:08.506365 1061 log.go:172] (0xc0005ae280) (5) Data frame handling\nI0510 21:41:08.506384 1061 log.go:172] (0xc0005ae280) (5) Data frame sent\nI0510 21:41:08.506407 1061 log.go:172] (0xc000ade2c0) Data frame received for 5\nI0510 21:41:08.506431 1061 log.go:172] (0xc0005ae280) (5) Data frame handling\nI0510 21:41:08.506449 1061 log.go:172] (0xc000ade2c0) Data frame received for 3\nI0510 21:41:08.506458 1061 log.go:172] (0xc0005ae1e0) (3) Data frame handling\nI0510 21:41:08.506476 1061 log.go:172] (0xc0005ae1e0) (3) Data frame sent\nI0510 21:41:08.506496 1061 log.go:172] (0xc000ade2c0) Data frame received for 3\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0510 21:41:08.506505 1061 log.go:172] (0xc0005ae1e0) (3) Data frame handling\nI0510 21:41:08.507943 1061 log.go:172] (0xc000ade2c0) Data frame received for 1\nI0510 21:41:08.507975 1061 log.go:172] (0xc00055c6e0) (1) Data frame handling\nI0510 21:41:08.508001 1061 log.go:172] (0xc00055c6e0) (1) Data frame sent\nI0510 21:41:08.508031 1061 log.go:172] (0xc000ade2c0) (0xc00055c6e0) Stream removed, broadcasting: 1\nI0510 21:41:08.508055 1061 log.go:172] (0xc000ade2c0) Go away received\nI0510 21:41:08.508488 1061 log.go:172] (0xc000ade2c0) (0xc00055c6e0) Stream removed, broadcasting: 1\nI0510 21:41:08.508513 1061 log.go:172] (0xc000ade2c0) (0xc0005ae1e0) Stream removed, broadcasting: 3\nI0510 21:41:08.508528 1061 log.go:172] (0xc000ade2c0) (0xc0005ae280) Stream removed, broadcasting: 5\n" May 10 21:41:08.513: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" May 10 21:41:08.513: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' May 10 21:41:08.513: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5251 ss-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' May 10 21:41:08.778: INFO: stderr: "I0510 21:41:08.644269 
1082 log.go:172] (0xc0009e00b0) (0xc0006ec000) Create stream\nI0510 21:41:08.644328 1082 log.go:172] (0xc0009e00b0) (0xc0006ec000) Stream added, broadcasting: 1\nI0510 21:41:08.646390 1082 log.go:172] (0xc0009e00b0) Reply frame received for 1\nI0510 21:41:08.646433 1082 log.go:172] (0xc0009e00b0) (0xc000919680) Create stream\nI0510 21:41:08.646445 1082 log.go:172] (0xc0009e00b0) (0xc000919680) Stream added, broadcasting: 3\nI0510 21:41:08.647307 1082 log.go:172] (0xc0009e00b0) Reply frame received for 3\nI0510 21:41:08.647335 1082 log.go:172] (0xc0009e00b0) (0xc000919720) Create stream\nI0510 21:41:08.647347 1082 log.go:172] (0xc0009e00b0) (0xc000919720) Stream added, broadcasting: 5\nI0510 21:41:08.648254 1082 log.go:172] (0xc0009e00b0) Reply frame received for 5\nI0510 21:41:08.724907 1082 log.go:172] (0xc0009e00b0) Data frame received for 5\nI0510 21:41:08.724939 1082 log.go:172] (0xc000919720) (5) Data frame handling\nI0510 21:41:08.724957 1082 log.go:172] (0xc000919720) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0510 21:41:08.770891 1082 log.go:172] (0xc0009e00b0) Data frame received for 3\nI0510 21:41:08.770942 1082 log.go:172] (0xc000919680) (3) Data frame handling\nI0510 21:41:08.770977 1082 log.go:172] (0xc000919680) (3) Data frame sent\nI0510 21:41:08.771132 1082 log.go:172] (0xc0009e00b0) Data frame received for 5\nI0510 21:41:08.771175 1082 log.go:172] (0xc000919720) (5) Data frame handling\nI0510 21:41:08.771203 1082 log.go:172] (0xc0009e00b0) Data frame received for 3\nI0510 21:41:08.771216 1082 log.go:172] (0xc000919680) (3) Data frame handling\nI0510 21:41:08.773318 1082 log.go:172] (0xc0009e00b0) Data frame received for 1\nI0510 21:41:08.773345 1082 log.go:172] (0xc0006ec000) (1) Data frame handling\nI0510 21:41:08.773365 1082 log.go:172] (0xc0006ec000) (1) Data frame sent\nI0510 21:41:08.773388 1082 log.go:172] (0xc0009e00b0) (0xc0006ec000) Stream removed, broadcasting: 1\nI0510 21:41:08.773419 1082 log.go:172] 
(0xc0009e00b0) Go away received\nI0510 21:41:08.773689 1082 log.go:172] (0xc0009e00b0) (0xc0006ec000) Stream removed, broadcasting: 1\nI0510 21:41:08.773704 1082 log.go:172] (0xc0009e00b0) (0xc000919680) Stream removed, broadcasting: 3\nI0510 21:41:08.773712 1082 log.go:172] (0xc0009e00b0) (0xc000919720) Stream removed, broadcasting: 5\n" May 10 21:41:08.778: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" May 10 21:41:08.778: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' May 10 21:41:08.778: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5251 ss-2 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' May 10 21:41:09.011: INFO: stderr: "I0510 21:41:08.901267 1100 log.go:172] (0xc000b23600) (0xc000a92640) Create stream\nI0510 21:41:08.901330 1100 log.go:172] (0xc000b23600) (0xc000a92640) Stream added, broadcasting: 1\nI0510 21:41:08.906286 1100 log.go:172] (0xc000b23600) Reply frame received for 1\nI0510 21:41:08.906332 1100 log.go:172] (0xc000b23600) (0xc000829c20) Create stream\nI0510 21:41:08.906348 1100 log.go:172] (0xc000b23600) (0xc000829c20) Stream added, broadcasting: 3\nI0510 21:41:08.907237 1100 log.go:172] (0xc000b23600) Reply frame received for 3\nI0510 21:41:08.907283 1100 log.go:172] (0xc000b23600) (0xc000829cc0) Create stream\nI0510 21:41:08.907297 1100 log.go:172] (0xc000b23600) (0xc000829cc0) Stream added, broadcasting: 5\nI0510 21:41:08.908150 1100 log.go:172] (0xc000b23600) Reply frame received for 5\nI0510 21:41:08.971501 1100 log.go:172] (0xc000b23600) Data frame received for 5\nI0510 21:41:08.971525 1100 log.go:172] (0xc000829cc0) (5) Data frame handling\nI0510 21:41:08.971539 1100 log.go:172] (0xc000829cc0) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0510 21:41:09.006593 1100 log.go:172] (0xc000b23600) Data frame 
received for 5\nI0510 21:41:09.006641 1100 log.go:172] (0xc000829cc0) (5) Data frame handling\nI0510 21:41:09.006666 1100 log.go:172] (0xc000b23600) Data frame received for 3\nI0510 21:41:09.006704 1100 log.go:172] (0xc000829c20) (3) Data frame handling\nI0510 21:41:09.006743 1100 log.go:172] (0xc000829c20) (3) Data frame sent\nI0510 21:41:09.006767 1100 log.go:172] (0xc000b23600) Data frame received for 3\nI0510 21:41:09.006777 1100 log.go:172] (0xc000829c20) (3) Data frame handling\nI0510 21:41:09.007568 1100 log.go:172] (0xc000b23600) Data frame received for 1\nI0510 21:41:09.007590 1100 log.go:172] (0xc000a92640) (1) Data frame handling\nI0510 21:41:09.007604 1100 log.go:172] (0xc000a92640) (1) Data frame sent\nI0510 21:41:09.007696 1100 log.go:172] (0xc000b23600) (0xc000a92640) Stream removed, broadcasting: 1\nI0510 21:41:09.007756 1100 log.go:172] (0xc000b23600) Go away received\nI0510 21:41:09.007914 1100 log.go:172] (0xc000b23600) (0xc000a92640) Stream removed, broadcasting: 1\nI0510 21:41:09.007927 1100 log.go:172] (0xc000b23600) (0xc000829c20) Stream removed, broadcasting: 3\nI0510 21:41:09.007933 1100 log.go:172] (0xc000b23600) (0xc000829cc0) Stream removed, broadcasting: 5\n" May 10 21:41:09.012: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" May 10 21:41:09.012: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-2: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' May 10 21:41:09.012: INFO: Waiting for statefulset status.replicas updated to 0 May 10 21:41:09.059: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 2 May 10 21:41:19.065: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false May 10 21:41:19.065: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false May 10 21:41:19.065: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false May 10 
21:41:19.074: INFO: POD NODE PHASE GRACE CONDITIONS May 10 21:41:19.074: INFO: ss-0 jerma-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-10 21:40:34 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-10 21:41:09 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-10 21:41:09 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-10 21:40:34 +0000 UTC }] May 10 21:41:19.074: INFO: ss-1 jerma-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-10 21:40:57 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-10 21:41:08 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-10 21:41:08 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-10 21:40:57 +0000 UTC }] May 10 21:41:19.074: INFO: ss-2 jerma-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-10 21:40:57 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-10 21:41:09 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-10 21:41:09 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-10 21:40:57 +0000 UTC }] May 10 21:41:19.074: INFO: May 10 21:41:19.074: INFO: StatefulSet ss has not reached scale 0, at 3 May 10 21:41:20.120: INFO: POD NODE PHASE GRACE CONDITIONS May 10 21:41:20.120: INFO: ss-0 jerma-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-10 21:40:34 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-10 21:41:09 +0000 UTC ContainersNotReady containers with unready status: [webserver]} 
{ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-10 21:41:09 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-10 21:40:34 +0000 UTC }] May 10 21:41:20.120: INFO: ss-1 jerma-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-10 21:40:57 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-10 21:41:08 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-10 21:41:08 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-10 21:40:57 +0000 UTC }] May 10 21:41:20.120: INFO: ss-2 jerma-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-10 21:40:57 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-10 21:41:09 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-10 21:41:09 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-10 21:40:57 +0000 UTC }] May 10 21:41:20.120: INFO: May 10 21:41:20.120: INFO: StatefulSet ss has not reached scale 0, at 3 May 10 21:41:21.150: INFO: POD NODE PHASE GRACE CONDITIONS May 10 21:41:21.150: INFO: ss-0 jerma-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-10 21:40:34 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-10 21:41:09 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-10 21:41:09 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-10 21:40:34 +0000 UTC }] May 10 21:41:21.150: INFO: ss-1 jerma-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 
+0000 UTC 2020-05-10 21:40:57 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-10 21:41:08 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-10 21:41:08 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-10 21:40:57 +0000 UTC }] May 10 21:41:21.150: INFO: ss-2 jerma-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-10 21:40:57 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-10 21:41:09 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-10 21:41:09 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-10 21:40:57 +0000 UTC }] May 10 21:41:21.150: INFO: May 10 21:41:21.150: INFO: StatefulSet ss has not reached scale 0, at 3 May 10 21:41:22.154: INFO: POD NODE PHASE GRACE CONDITIONS May 10 21:41:22.154: INFO: ss-1 jerma-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-10 21:40:57 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-10 21:41:08 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-10 21:41:08 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-10 21:40:57 +0000 UTC }] May 10 21:41:22.154: INFO: ss-2 jerma-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-10 21:40:57 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-10 21:41:09 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-10 21:41:09 +0000 UTC ContainersNotReady containers with unready status: [webserver]} 
{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-10 21:40:57 +0000 UTC }] May 10 21:41:22.154: INFO: May 10 21:41:22.154: INFO: StatefulSet ss has not reached scale 0, at 2 May 10 21:41:23.158: INFO: POD NODE PHASE GRACE CONDITIONS May 10 21:41:23.158: INFO: ss-1 jerma-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-10 21:40:57 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-10 21:41:08 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-10 21:41:08 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-10 21:40:57 +0000 UTC }] May 10 21:41:23.158: INFO: ss-2 jerma-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-10 21:40:57 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-10 21:41:09 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-10 21:41:09 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-10 21:40:57 +0000 UTC }] May 10 21:41:23.158: INFO: May 10 21:41:23.158: INFO: StatefulSet ss has not reached scale 0, at 2 May 10 21:41:24.163: INFO: POD NODE PHASE GRACE CONDITIONS May 10 21:41:24.163: INFO: ss-1 jerma-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-10 21:40:57 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-10 21:41:08 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-10 21:41:08 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-10 21:40:57 +0000 UTC }] May 10 21:41:24.163: INFO: ss-2 jerma-worker2 Pending 30s [{Initialized True 
0001-01-01 00:00:00 +0000 UTC 2020-05-10 21:40:57 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-10 21:41:09 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-10 21:41:09 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-10 21:40:57 +0000 UTC }] May 10 21:41:24.163: INFO: May 10 21:41:24.163: INFO: StatefulSet ss has not reached scale 0, at 2 May 10 21:41:25.167: INFO: POD NODE PHASE GRACE CONDITIONS May 10 21:41:25.167: INFO: ss-1 jerma-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-10 21:40:57 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-10 21:41:08 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-10 21:41:08 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-10 21:40:57 +0000 UTC }] May 10 21:41:25.167: INFO: ss-2 jerma-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-10 21:40:57 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-10 21:41:09 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-10 21:41:09 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-10 21:40:57 +0000 UTC }] May 10 21:41:25.167: INFO: May 10 21:41:25.167: INFO: StatefulSet ss has not reached scale 0, at 2 May 10 21:41:26.172: INFO: POD NODE PHASE GRACE CONDITIONS May 10 21:41:26.172: INFO: ss-1 jerma-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-10 21:40:57 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-10 21:41:08 +0000 UTC ContainersNotReady containers 
with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-10 21:41:08 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-10 21:40:57 +0000 UTC }] May 10 21:41:26.172: INFO: ss-2 jerma-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-10 21:40:57 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-10 21:41:09 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-10 21:41:09 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-10 21:40:57 +0000 UTC }] May 10 21:41:26.172: INFO: May 10 21:41:26.172: INFO: StatefulSet ss has not reached scale 0, at 2 May 10 21:41:27.179: INFO: POD NODE PHASE GRACE CONDITIONS May 10 21:41:27.179: INFO: ss-1 jerma-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-10 21:40:57 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-10 21:41:08 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-10 21:41:08 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-10 21:40:57 +0000 UTC }] May 10 21:41:27.179: INFO: ss-2 jerma-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-10 21:40:57 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-10 21:41:09 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-10 21:41:09 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-10 21:40:57 +0000 UTC }] May 10 21:41:27.179: INFO: May 10 21:41:27.179: INFO: 
StatefulSet ss has not reached scale 0, at 2 May 10 21:41:28.194: INFO: POD NODE PHASE GRACE CONDITIONS May 10 21:41:28.195: INFO: ss-1 jerma-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-10 21:40:57 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-10 21:41:08 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-10 21:41:08 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-10 21:40:57 +0000 UTC }] May 10 21:41:28.195: INFO: ss-2 jerma-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-10 21:40:57 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-10 21:41:09 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-10 21:41:09 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-10 21:40:57 +0000 UTC }] May 10 21:41:28.195: INFO: May 10 21:41:28.195: INFO: StatefulSet ss has not reached scale 0, at 2 STEP: Scaling down stateful set ss to 0 replicas and waiting until none of the pods will run in namespace statefulset-5251 May 10 21:41:29.199: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5251 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 10 21:41:29.336: INFO: rc: 1 May 10 21:41:29.336: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5251 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: error: unable to upgrade connection: container not found ("webserver") error: exit status 1 May 10 21:41:39.336: INFO: Running '/usr/local/bin/kubectl 
--kubeconfig=/root/.kube/config exec --namespace=statefulset-5251 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 10 21:41:39.444: INFO: rc: 1 May 10 21:41:39.444: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5251 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 May 10 21:41:49.444: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5251 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 10 21:41:49.612: INFO: rc: 1 May 10 21:41:49.612: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5251 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 May 10 21:41:59.612: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5251 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 10 21:41:59.720: INFO: rc: 1 May 10 21:41:59.720: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5251 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 May 10 21:42:09.720: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5251 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 10 21:42:09.828: INFO: rc: 1 May 10 21:42:09.828: INFO: Waiting 10s to retry failed 
RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5251 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 May 10 21:42:19.828: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5251 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 10 21:42:19.929: INFO: rc: 1 May 10 21:42:19.929: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5251 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 May 10 21:42:29.929: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5251 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 10 21:42:30.025: INFO: rc: 1 May 10 21:42:30.025: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5251 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 May 10 21:42:40.025: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5251 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 10 21:42:40.125: INFO: rc: 1 May 10 21:42:40.125: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5251 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server 
(NotFound): pods "ss-1" not found error: exit status 1 May 10 21:42:50.125: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5251 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 10 21:42:50.218: INFO: rc: 1 May 10 21:42:50.218: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5251 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 May 10 21:43:00.219: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5251 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 10 21:43:00.314: INFO: rc: 1 May 10 21:43:00.314: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5251 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 May 10 21:43:10.314: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5251 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 10 21:43:10.446: INFO: rc: 1 May 10 21:43:10.446: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5251 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 May 10 21:43:20.447: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5251 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html 
/usr/local/apache2/htdocs/ || true' May 10 21:43:20.559: INFO: rc: 1 May 10 21:43:20.559: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5251 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 May 10 21:43:30.560: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5251 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 10 21:43:30.722: INFO: rc: 1 May 10 21:43:30.722: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5251 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 May 10 21:43:40.723: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5251 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 10 21:43:40.836: INFO: rc: 1 May 10 21:43:40.836: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5251 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 May 10 21:43:50.836: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5251 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 10 21:43:50.943: INFO: rc: 1 May 10 21:43:50.943: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5251 ss-1 
-- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 May 10 21:44:00.943: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5251 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 10 21:44:01.045: INFO: rc: 1 May 10 21:44:01.045: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5251 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 May 10 21:44:11.046: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5251 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 10 21:44:11.138: INFO: rc: 1 May 10 21:44:11.138: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5251 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 May 10 21:44:21.138: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5251 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 10 21:44:21.237: INFO: rc: 1 May 10 21:44:21.237: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5251 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 May 10 21:44:31.237: INFO: Running '/usr/local/bin/kubectl 
--kubeconfig=/root/.kube/config exec --namespace=statefulset-5251 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 10 21:44:31.407: INFO: rc: 1 May 10 21:44:31.407: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5251 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 May 10 21:44:41.407: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5251 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 10 21:44:41.503: INFO: rc: 1 May 10 21:44:41.503: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5251 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 May 10 21:44:51.503: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5251 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 10 21:44:51.599: INFO: rc: 1 May 10 21:44:51.599: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5251 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 May 10 21:45:01.599: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5251 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 10 21:45:01.718: INFO: rc: 1 May 10 21:45:01.718: INFO: Waiting 10s to retry failed 
RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5251 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 May 10 21:45:11.718: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5251 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 10 21:45:11.811: INFO: rc: 1 May 10 21:45:11.811: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5251 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 May 10 21:45:21.811: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5251 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 10 21:45:21.915: INFO: rc: 1 May 10 21:45:21.915: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5251 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 May 10 21:45:31.915: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5251 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 10 21:45:32.023: INFO: rc: 1 May 10 21:45:32.023: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5251 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server 
(NotFound): pods "ss-1" not found error: exit status 1 May 10 21:45:42.023: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5251 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 10 21:45:42.131: INFO: rc: 1 May 10 21:45:42.131: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5251 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 May 10 21:45:52.131: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5251 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 10 21:45:52.233: INFO: rc: 1 May 10 21:45:52.233: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5251 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 May 10 21:46:02.233: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5251 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 10 21:46:02.336: INFO: rc: 1 May 10 21:46:02.336: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5251 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 May 10 21:46:12.337: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5251 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html 
/usr/local/apache2/htdocs/ || true' May 10 21:46:12.437: INFO: rc: 1 May 10 21:46:12.437: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5251 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 May 10 21:46:22.437: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5251 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 10 21:46:22.523: INFO: rc: 1 May 10 21:46:22.523: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5251 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 May 10 21:46:32.523: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5251 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 10 21:46:32.632: INFO: rc: 1 May 10 21:46:32.632: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: May 10 21:46:32.632: INFO: Scaling statefulset ss to 0 May 10 21:46:32.640: INFO: Waiting for statefulset status.replicas updated to 0 [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90 May 10 21:46:32.642: INFO: Deleting all statefulset in ns statefulset-5251 May 10 21:46:32.644: INFO: Scaling statefulset ss to 0 May 10 21:46:32.651: INFO: Waiting for statefulset status.replicas updated to 0 May 10 21:46:32.653: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
May 10 21:46:32.719: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-5251" for this suite.
• [SLOW TEST:358.466 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
    Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]","total":278,"completed":126,"skipped":2145,"failed":0}
SS
------------------------------
[sig-api-machinery] ResourceQuota
  should verify ResourceQuota with terminating scopes. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
May 10 21:46:32.728: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should verify ResourceQuota with terminating scopes.
[Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a ResourceQuota with terminating scope
STEP: Ensuring ResourceQuota status is calculated
STEP: Creating a ResourceQuota with not terminating scope
STEP: Ensuring ResourceQuota status is calculated
STEP: Creating a long running pod
STEP: Ensuring resource quota with not terminating scope captures the pod usage
STEP: Ensuring resource quota with terminating scope ignored the pod usage
STEP: Deleting the pod
STEP: Ensuring resource quota status released the pod usage
STEP: Creating a terminating pod
STEP: Ensuring resource quota with terminating scope captures the pod usage
STEP: Ensuring resource quota with not terminating scope ignored the pod usage
STEP: Deleting the pod
STEP: Ensuring resource quota status released the pod usage
[AfterEach] [sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
May 10 21:46:49.141: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-1269" for this suite.
• [SLOW TEST:16.421 seconds]
[sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should verify ResourceQuota with terminating scopes. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with terminating scopes.
[Conformance]","total":278,"completed":127,"skipped":2147,"failed":0}
SSSSSSSSSSSSSSS
------------------------------
[sig-apps] Job
  should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] Job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
May 10 21:46:49.150: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename job
STEP: Waiting for a default service account to be provisioned in namespace
[It] should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a job
STEP: Ensuring job reaches completions
[AfterEach] [sig-apps] Job
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
May 10 21:47:05.278: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "job-5673" for this suite.
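[Editor's note] The pattern that dominates this log, both in the RunHostCmd loop of the StatefulSet section (the same kubectl exec retried every 10s for roughly five minutes before the test moved on) and in the "ensuring job reaches completions" wait above, is poll-with-fixed-interval-until-timeout. A minimal sketch of that cadence, assuming a command that returns a (return-code, output) pair; the helper name `retry_until` and its parameters are illustrative, not the e2e framework's actual API:

```python
import time
from typing import Callable, Tuple

def retry_until(cmd: Callable[[], Tuple[int, str]],
                interval: float = 10.0,
                timeout: float = 300.0,
                sleep: Callable[[float], None] = time.sleep,
                clock: Callable[[], float] = time.monotonic) -> Tuple[int, str]:
    """Re-run cmd every `interval` seconds until it returns rc == 0
    or `timeout` elapses, mirroring the 10s retry cadence in the log."""
    deadline = clock() + timeout
    while True:
        rc, out = cmd()
        if rc == 0 or clock() >= deadline:
            # Success, or out of budget: hand back the last result either way,
            # just as the e2e run eventually stops retrying and scales to 0.
            return rc, out
        sleep(interval)
```

Note the loop keeps retrying even after the failure mode changes (here, from `container not found` to `pods "ss-1" not found`): the helper only looks at the return code, which is why the log shows the same command re-run long after the pod was already gone.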
• [SLOW TEST:16.136 seconds]
[sig-apps] Job
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] Job should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]","total":278,"completed":128,"skipped":2162,"failed":0}
SSSSSSSSSSSSSSS
------------------------------
[sig-network] Services
  should be able to change the type from ExternalName to NodePort [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
May 10 21:47:05.287: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139
[It] should be able to change the type from ExternalName to NodePort [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating a service externalname-service with the type=ExternalName in namespace services-5389
STEP: changing the ExternalName service to type=NodePort
STEP: creating replication controller externalname-service in namespace services-5389
I0510 21:47:05.519378 6 runners.go:189] Created replication controller with name: externalname-service, namespace: services-5389, replica count: 2
I0510 21:47:08.570495 6 runners.go:189] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0
terminating, 0 unknown, 0 runningButNotReady I0510 21:47:11.570811 6 runners.go:189] externalname-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady May 10 21:47:11.570: INFO: Creating new exec pod May 10 21:47:16.587: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-5389 execpodvknkr -- /bin/sh -x -c nc -zv -t -w 2 externalname-service 80' May 10 21:47:16.812: INFO: stderr: "I0510 21:47:16.718264 1737 log.go:172] (0xc000979130) (0xc000a025a0) Create stream\nI0510 21:47:16.718302 1737 log.go:172] (0xc000979130) (0xc000a025a0) Stream added, broadcasting: 1\nI0510 21:47:16.722149 1737 log.go:172] (0xc000979130) Reply frame received for 1\nI0510 21:47:16.722197 1737 log.go:172] (0xc000979130) (0xc0005e26e0) Create stream\nI0510 21:47:16.722217 1737 log.go:172] (0xc000979130) (0xc0005e26e0) Stream added, broadcasting: 3\nI0510 21:47:16.722926 1737 log.go:172] (0xc000979130) Reply frame received for 3\nI0510 21:47:16.722965 1737 log.go:172] (0xc000979130) (0xc00077d4a0) Create stream\nI0510 21:47:16.722980 1737 log.go:172] (0xc000979130) (0xc00077d4a0) Stream added, broadcasting: 5\nI0510 21:47:16.723669 1737 log.go:172] (0xc000979130) Reply frame received for 5\nI0510 21:47:16.805857 1737 log.go:172] (0xc000979130) Data frame received for 5\nI0510 21:47:16.805922 1737 log.go:172] (0xc00077d4a0) (5) Data frame handling\nI0510 21:47:16.805963 1737 log.go:172] (0xc00077d4a0) (5) Data frame sent\n+ nc -zv -t -w 2 externalname-service 80\nI0510 21:47:16.806250 1737 log.go:172] (0xc000979130) Data frame received for 5\nI0510 21:47:16.806311 1737 log.go:172] (0xc00077d4a0) (5) Data frame handling\nI0510 21:47:16.806331 1737 log.go:172] (0xc00077d4a0) (5) Data frame sent\nConnection to externalname-service 80 port [tcp/http] succeeded!\nI0510 21:47:16.806670 1737 log.go:172] (0xc000979130) Data frame received for 5\nI0510 21:47:16.806707 1737 log.go:172] 
(0xc00077d4a0) (5) Data frame handling\nI0510 21:47:16.806838 1737 log.go:172] (0xc000979130) Data frame received for 3\nI0510 21:47:16.806877 1737 log.go:172] (0xc0005e26e0) (3) Data frame handling\nI0510 21:47:16.808568 1737 log.go:172] (0xc000979130) Data frame received for 1\nI0510 21:47:16.808589 1737 log.go:172] (0xc000a025a0) (1) Data frame handling\nI0510 21:47:16.808602 1737 log.go:172] (0xc000a025a0) (1) Data frame sent\nI0510 21:47:16.808615 1737 log.go:172] (0xc000979130) (0xc000a025a0) Stream removed, broadcasting: 1\nI0510 21:47:16.808633 1737 log.go:172] (0xc000979130) Go away received\nI0510 21:47:16.808911 1737 log.go:172] (0xc000979130) (0xc000a025a0) Stream removed, broadcasting: 1\nI0510 21:47:16.808921 1737 log.go:172] (0xc000979130) (0xc0005e26e0) Stream removed, broadcasting: 3\nI0510 21:47:16.808926 1737 log.go:172] (0xc000979130) (0xc00077d4a0) Stream removed, broadcasting: 5\n" May 10 21:47:16.812: INFO: stdout: "" May 10 21:47:16.813: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-5389 execpodvknkr -- /bin/sh -x -c nc -zv -t -w 2 10.101.96.65 80' May 10 21:47:17.012: INFO: stderr: "I0510 21:47:16.944327 1758 log.go:172] (0xc0009549a0) (0xc0009be0a0) Create stream\nI0510 21:47:16.944376 1758 log.go:172] (0xc0009549a0) (0xc0009be0a0) Stream added, broadcasting: 1\nI0510 21:47:16.946765 1758 log.go:172] (0xc0009549a0) Reply frame received for 1\nI0510 21:47:16.946810 1758 log.go:172] (0xc0009549a0) (0xc0009be140) Create stream\nI0510 21:47:16.946830 1758 log.go:172] (0xc0009549a0) (0xc0009be140) Stream added, broadcasting: 3\nI0510 21:47:16.947611 1758 log.go:172] (0xc0009549a0) Reply frame received for 3\nI0510 21:47:16.947663 1758 log.go:172] (0xc0009549a0) (0xc0009be280) Create stream\nI0510 21:47:16.947693 1758 log.go:172] (0xc0009549a0) (0xc0009be280) Stream added, broadcasting: 5\nI0510 21:47:16.948490 1758 log.go:172] (0xc0009549a0) Reply frame received for 5\nI0510 21:47:17.006264 1758 
log.go:172] (0xc0009549a0) Data frame received for 3\nI0510 21:47:17.006294 1758 log.go:172] (0xc0009be140) (3) Data frame handling\nI0510 21:47:17.006318 1758 log.go:172] (0xc0009549a0) Data frame received for 5\nI0510 21:47:17.006334 1758 log.go:172] (0xc0009be280) (5) Data frame handling\nI0510 21:47:17.006348 1758 log.go:172] (0xc0009be280) (5) Data frame sent\nI0510 21:47:17.006362 1758 log.go:172] (0xc0009549a0) Data frame received for 5\nI0510 21:47:17.006387 1758 log.go:172] (0xc0009be280) (5) Data frame handling\n+ nc -zv -t -w 2 10.101.96.65 80\nConnection to 10.101.96.65 80 port [tcp/http] succeeded!\nI0510 21:47:17.007701 1758 log.go:172] (0xc0009549a0) Data frame received for 1\nI0510 21:47:17.007719 1758 log.go:172] (0xc0009be0a0) (1) Data frame handling\nI0510 21:47:17.007735 1758 log.go:172] (0xc0009be0a0) (1) Data frame sent\nI0510 21:47:17.007745 1758 log.go:172] (0xc0009549a0) (0xc0009be0a0) Stream removed, broadcasting: 1\nI0510 21:47:17.007769 1758 log.go:172] (0xc0009549a0) Go away received\nI0510 21:47:17.007991 1758 log.go:172] (0xc0009549a0) (0xc0009be0a0) Stream removed, broadcasting: 1\nI0510 21:47:17.008005 1758 log.go:172] (0xc0009549a0) (0xc0009be140) Stream removed, broadcasting: 3\nI0510 21:47:17.008013 1758 log.go:172] (0xc0009549a0) (0xc0009be280) Stream removed, broadcasting: 5\n" May 10 21:47:17.012: INFO: stdout: "" May 10 21:47:17.012: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-5389 execpodvknkr -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.10 30089' May 10 21:47:17.208: INFO: stderr: "I0510 21:47:17.142628 1779 log.go:172] (0xc0009b48f0) (0xc000978320) Create stream\nI0510 21:47:17.142686 1779 log.go:172] (0xc0009b48f0) (0xc000978320) Stream added, broadcasting: 1\nI0510 21:47:17.145991 1779 log.go:172] (0xc0009b48f0) Reply frame received for 1\nI0510 21:47:17.146029 1779 log.go:172] (0xc0009b48f0) (0xc000622780) Create stream\nI0510 21:47:17.146047 1779 log.go:172] 
(0xc0009b48f0) (0xc000622780) Stream added, broadcasting: 3\nI0510 21:47:17.146920 1779 log.go:172] (0xc0009b48f0) Reply frame received for 3\nI0510 21:47:17.146953 1779 log.go:172] (0xc0009b48f0) (0xc0006f9540) Create stream\nI0510 21:47:17.146966 1779 log.go:172] (0xc0009b48f0) (0xc0006f9540) Stream added, broadcasting: 5\nI0510 21:47:17.147765 1779 log.go:172] (0xc0009b48f0) Reply frame received for 5\nI0510 21:47:17.200926 1779 log.go:172] (0xc0009b48f0) Data frame received for 3\nI0510 21:47:17.200962 1779 log.go:172] (0xc000622780) (3) Data frame handling\nI0510 21:47:17.201063 1779 log.go:172] (0xc0009b48f0) Data frame received for 5\nI0510 21:47:17.201079 1779 log.go:172] (0xc0006f9540) (5) Data frame handling\nI0510 21:47:17.201095 1779 log.go:172] (0xc0006f9540) (5) Data frame sent\n+ nc -zv -t -w 2 172.17.0.10 30089\nConnection to 172.17.0.10 30089 port [tcp/30089] succeeded!\nI0510 21:47:17.201317 1779 log.go:172] (0xc0009b48f0) Data frame received for 5\nI0510 21:47:17.201572 1779 log.go:172] (0xc0006f9540) (5) Data frame handling\nI0510 21:47:17.202963 1779 log.go:172] (0xc0009b48f0) Data frame received for 1\nI0510 21:47:17.202992 1779 log.go:172] (0xc000978320) (1) Data frame handling\nI0510 21:47:17.203006 1779 log.go:172] (0xc000978320) (1) Data frame sent\nI0510 21:47:17.203024 1779 log.go:172] (0xc0009b48f0) (0xc000978320) Stream removed, broadcasting: 1\nI0510 21:47:17.203083 1779 log.go:172] (0xc0009b48f0) Go away received\nI0510 21:47:17.203485 1779 log.go:172] (0xc0009b48f0) (0xc000978320) Stream removed, broadcasting: 1\nI0510 21:47:17.203510 1779 log.go:172] (0xc0009b48f0) (0xc000622780) Stream removed, broadcasting: 3\nI0510 21:47:17.203523 1779 log.go:172] (0xc0009b48f0) (0xc0006f9540) Stream removed, broadcasting: 5\n" May 10 21:47:17.208: INFO: stdout: "" May 10 21:47:17.208: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-5389 execpodvknkr -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.8 
30089' May 10 21:47:17.464: INFO: stderr: "I0510 21:47:17.321560 1800 log.go:172] (0xc0000f51e0) (0xc000659a40) Create stream\nI0510 21:47:17.321611 1800 log.go:172] (0xc0000f51e0) (0xc000659a40) Stream added, broadcasting: 1\nI0510 21:47:17.323817 1800 log.go:172] (0xc0000f51e0) Reply frame received for 1\nI0510 21:47:17.323852 1800 log.go:172] (0xc0000f51e0) (0xc00094e000) Create stream\nI0510 21:47:17.323860 1800 log.go:172] (0xc0000f51e0) (0xc00094e000) Stream added, broadcasting: 3\nI0510 21:47:17.324772 1800 log.go:172] (0xc0000f51e0) Reply frame received for 3\nI0510 21:47:17.324800 1800 log.go:172] (0xc0000f51e0) (0xc0009f0000) Create stream\nI0510 21:47:17.324810 1800 log.go:172] (0xc0000f51e0) (0xc0009f0000) Stream added, broadcasting: 5\nI0510 21:47:17.325863 1800 log.go:172] (0xc0000f51e0) Reply frame received for 5\nI0510 21:47:17.460862 1800 log.go:172] (0xc0000f51e0) Data frame received for 5\nI0510 21:47:17.460902 1800 log.go:172] (0xc0000f51e0) Data frame received for 3\nI0510 21:47:17.460927 1800 log.go:172] (0xc00094e000) (3) Data frame handling\nI0510 21:47:17.460954 1800 log.go:172] (0xc0009f0000) (5) Data frame handling\nI0510 21:47:17.460971 1800 log.go:172] (0xc0009f0000) (5) Data frame sent\nI0510 21:47:17.460985 1800 log.go:172] (0xc0000f51e0) Data frame received for 5\nI0510 21:47:17.460993 1800 log.go:172] (0xc0009f0000) (5) Data frame handling\n+ nc -zv -t -w 2 172.17.0.8 30089\nConnection to 172.17.0.8 30089 port [tcp/30089] succeeded!\nI0510 21:47:17.461670 1800 log.go:172] (0xc0000f51e0) Data frame received for 1\nI0510 21:47:17.461682 1800 log.go:172] (0xc000659a40) (1) Data frame handling\nI0510 21:47:17.461689 1800 log.go:172] (0xc000659a40) (1) Data frame sent\nI0510 21:47:17.461697 1800 log.go:172] (0xc0000f51e0) (0xc000659a40) Stream removed, broadcasting: 1\nI0510 21:47:17.461817 1800 log.go:172] (0xc0000f51e0) Go away received\nI0510 21:47:17.461904 1800 log.go:172] (0xc0000f51e0) (0xc000659a40) Stream removed, broadcasting: 
1\nI0510 21:47:17.461926 1800 log.go:172] (0xc0000f51e0) (0xc00094e000) Stream removed, broadcasting: 3\nI0510 21:47:17.461936 1800 log.go:172] (0xc0000f51e0) (0xc0009f0000) Stream removed, broadcasting: 5\n"
May 10 21:47:17.465: INFO: stdout: ""
May 10 21:47:17.465: INFO: Cleaning up the ExternalName to NodePort test service
[AfterEach] [sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
May 10 21:47:17.517: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-5389" for this suite.
[AfterEach] [sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143
• [SLOW TEST:12.259 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
should be able to change the type from ExternalName to NodePort [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]","total":278,"completed":129,"skipped":2177,"failed":0}
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] DNS should provide DNS for pods for Subdomain [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
May 10 21:47:17.546: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for pods for Subdomain [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: 
Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-4843.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-querier-2.dns-test-service-2.dns-4843.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-4843.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-querier-2.dns-test-service-2.dns-4843.svc.cluster.local;check="$$(dig +notcp +noall +answer +search dns-test-service-2.dns-4843.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service-2.dns-4843.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service-2.dns-4843.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service-2.dns-4843.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-4843.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-4843.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-querier-2.dns-test-service-2.dns-4843.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-4843.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-querier-2.dns-test-service-2.dns-4843.svc.cluster.local;check="$$(dig +notcp +noall +answer +search dns-test-service-2.dns-4843.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service-2.dns-4843.svc.cluster.local;check="$$(dig 
+tcp +noall +answer +search dns-test-service-2.dns-4843.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service-2.dns-4843.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-4843.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers May 10 21:47:25.664: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-4843.svc.cluster.local from pod dns-4843/dns-test-a9b0fd32-39d1-4711-9da2-20bb85afd490: the server could not find the requested resource (get pods dns-test-a9b0fd32-39d1-4711-9da2-20bb85afd490) May 10 21:47:25.667: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-4843.svc.cluster.local from pod dns-4843/dns-test-a9b0fd32-39d1-4711-9da2-20bb85afd490: the server could not find the requested resource (get pods dns-test-a9b0fd32-39d1-4711-9da2-20bb85afd490) May 10 21:47:25.670: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-4843.svc.cluster.local from pod dns-4843/dns-test-a9b0fd32-39d1-4711-9da2-20bb85afd490: the server could not find the requested resource (get pods dns-test-a9b0fd32-39d1-4711-9da2-20bb85afd490) May 10 21:47:25.673: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-4843.svc.cluster.local from pod dns-4843/dns-test-a9b0fd32-39d1-4711-9da2-20bb85afd490: the server could not find the requested resource (get pods dns-test-a9b0fd32-39d1-4711-9da2-20bb85afd490) May 10 21:47:25.683: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-4843.svc.cluster.local from pod dns-4843/dns-test-a9b0fd32-39d1-4711-9da2-20bb85afd490: the server 
could not find the requested resource (get pods dns-test-a9b0fd32-39d1-4711-9da2-20bb85afd490) May 10 21:47:25.686: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-4843.svc.cluster.local from pod dns-4843/dns-test-a9b0fd32-39d1-4711-9da2-20bb85afd490: the server could not find the requested resource (get pods dns-test-a9b0fd32-39d1-4711-9da2-20bb85afd490) May 10 21:47:25.688: INFO: Unable to read jessie_udp@dns-test-service-2.dns-4843.svc.cluster.local from pod dns-4843/dns-test-a9b0fd32-39d1-4711-9da2-20bb85afd490: the server could not find the requested resource (get pods dns-test-a9b0fd32-39d1-4711-9da2-20bb85afd490) May 10 21:47:25.691: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-4843.svc.cluster.local from pod dns-4843/dns-test-a9b0fd32-39d1-4711-9da2-20bb85afd490: the server could not find the requested resource (get pods dns-test-a9b0fd32-39d1-4711-9da2-20bb85afd490) May 10 21:47:25.698: INFO: Lookups using dns-4843/dns-test-a9b0fd32-39d1-4711-9da2-20bb85afd490 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-4843.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-4843.svc.cluster.local wheezy_udp@dns-test-service-2.dns-4843.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-4843.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-4843.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-4843.svc.cluster.local jessie_udp@dns-test-service-2.dns-4843.svc.cluster.local jessie_tcp@dns-test-service-2.dns-4843.svc.cluster.local] May 10 21:47:30.701: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-4843.svc.cluster.local from pod dns-4843/dns-test-a9b0fd32-39d1-4711-9da2-20bb85afd490: the server could not find the requested resource (get pods dns-test-a9b0fd32-39d1-4711-9da2-20bb85afd490) May 10 21:47:30.704: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-4843.svc.cluster.local from pod 
dns-4843/dns-test-a9b0fd32-39d1-4711-9da2-20bb85afd490: the server could not find the requested resource (get pods dns-test-a9b0fd32-39d1-4711-9da2-20bb85afd490) May 10 21:47:30.707: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-4843.svc.cluster.local from pod dns-4843/dns-test-a9b0fd32-39d1-4711-9da2-20bb85afd490: the server could not find the requested resource (get pods dns-test-a9b0fd32-39d1-4711-9da2-20bb85afd490) May 10 21:47:30.710: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-4843.svc.cluster.local from pod dns-4843/dns-test-a9b0fd32-39d1-4711-9da2-20bb85afd490: the server could not find the requested resource (get pods dns-test-a9b0fd32-39d1-4711-9da2-20bb85afd490) May 10 21:47:30.718: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-4843.svc.cluster.local from pod dns-4843/dns-test-a9b0fd32-39d1-4711-9da2-20bb85afd490: the server could not find the requested resource (get pods dns-test-a9b0fd32-39d1-4711-9da2-20bb85afd490) May 10 21:47:30.720: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-4843.svc.cluster.local from pod dns-4843/dns-test-a9b0fd32-39d1-4711-9da2-20bb85afd490: the server could not find the requested resource (get pods dns-test-a9b0fd32-39d1-4711-9da2-20bb85afd490) May 10 21:47:30.722: INFO: Unable to read jessie_udp@dns-test-service-2.dns-4843.svc.cluster.local from pod dns-4843/dns-test-a9b0fd32-39d1-4711-9da2-20bb85afd490: the server could not find the requested resource (get pods dns-test-a9b0fd32-39d1-4711-9da2-20bb85afd490) May 10 21:47:30.723: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-4843.svc.cluster.local from pod dns-4843/dns-test-a9b0fd32-39d1-4711-9da2-20bb85afd490: the server could not find the requested resource (get pods dns-test-a9b0fd32-39d1-4711-9da2-20bb85afd490) May 10 21:47:30.728: INFO: Lookups using dns-4843/dns-test-a9b0fd32-39d1-4711-9da2-20bb85afd490 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-4843.svc.cluster.local 
wheezy_tcp@dns-querier-2.dns-test-service-2.dns-4843.svc.cluster.local wheezy_udp@dns-test-service-2.dns-4843.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-4843.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-4843.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-4843.svc.cluster.local jessie_udp@dns-test-service-2.dns-4843.svc.cluster.local jessie_tcp@dns-test-service-2.dns-4843.svc.cluster.local] May 10 21:47:35.702: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-4843.svc.cluster.local from pod dns-4843/dns-test-a9b0fd32-39d1-4711-9da2-20bb85afd490: the server could not find the requested resource (get pods dns-test-a9b0fd32-39d1-4711-9da2-20bb85afd490) May 10 21:47:35.705: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-4843.svc.cluster.local from pod dns-4843/dns-test-a9b0fd32-39d1-4711-9da2-20bb85afd490: the server could not find the requested resource (get pods dns-test-a9b0fd32-39d1-4711-9da2-20bb85afd490) May 10 21:47:35.708: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-4843.svc.cluster.local from pod dns-4843/dns-test-a9b0fd32-39d1-4711-9da2-20bb85afd490: the server could not find the requested resource (get pods dns-test-a9b0fd32-39d1-4711-9da2-20bb85afd490) May 10 21:47:35.711: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-4843.svc.cluster.local from pod dns-4843/dns-test-a9b0fd32-39d1-4711-9da2-20bb85afd490: the server could not find the requested resource (get pods dns-test-a9b0fd32-39d1-4711-9da2-20bb85afd490) May 10 21:47:35.717: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-4843.svc.cluster.local from pod dns-4843/dns-test-a9b0fd32-39d1-4711-9da2-20bb85afd490: the server could not find the requested resource (get pods dns-test-a9b0fd32-39d1-4711-9da2-20bb85afd490) May 10 21:47:35.719: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-4843.svc.cluster.local from pod 
dns-4843/dns-test-a9b0fd32-39d1-4711-9da2-20bb85afd490: the server could not find the requested resource (get pods dns-test-a9b0fd32-39d1-4711-9da2-20bb85afd490) May 10 21:47:35.721: INFO: Unable to read jessie_udp@dns-test-service-2.dns-4843.svc.cluster.local from pod dns-4843/dns-test-a9b0fd32-39d1-4711-9da2-20bb85afd490: the server could not find the requested resource (get pods dns-test-a9b0fd32-39d1-4711-9da2-20bb85afd490) May 10 21:47:35.723: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-4843.svc.cluster.local from pod dns-4843/dns-test-a9b0fd32-39d1-4711-9da2-20bb85afd490: the server could not find the requested resource (get pods dns-test-a9b0fd32-39d1-4711-9da2-20bb85afd490) May 10 21:47:35.727: INFO: Lookups using dns-4843/dns-test-a9b0fd32-39d1-4711-9da2-20bb85afd490 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-4843.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-4843.svc.cluster.local wheezy_udp@dns-test-service-2.dns-4843.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-4843.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-4843.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-4843.svc.cluster.local jessie_udp@dns-test-service-2.dns-4843.svc.cluster.local jessie_tcp@dns-test-service-2.dns-4843.svc.cluster.local] May 10 21:47:40.703: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-4843.svc.cluster.local from pod dns-4843/dns-test-a9b0fd32-39d1-4711-9da2-20bb85afd490: the server could not find the requested resource (get pods dns-test-a9b0fd32-39d1-4711-9da2-20bb85afd490) May 10 21:47:40.707: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-4843.svc.cluster.local from pod dns-4843/dns-test-a9b0fd32-39d1-4711-9da2-20bb85afd490: the server could not find the requested resource (get pods dns-test-a9b0fd32-39d1-4711-9da2-20bb85afd490) May 10 21:47:40.710: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-4843.svc.cluster.local from pod 
dns-4843/dns-test-a9b0fd32-39d1-4711-9da2-20bb85afd490: the server could not find the requested resource (get pods dns-test-a9b0fd32-39d1-4711-9da2-20bb85afd490) May 10 21:47:40.713: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-4843.svc.cluster.local from pod dns-4843/dns-test-a9b0fd32-39d1-4711-9da2-20bb85afd490: the server could not find the requested resource (get pods dns-test-a9b0fd32-39d1-4711-9da2-20bb85afd490) May 10 21:47:40.722: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-4843.svc.cluster.local from pod dns-4843/dns-test-a9b0fd32-39d1-4711-9da2-20bb85afd490: the server could not find the requested resource (get pods dns-test-a9b0fd32-39d1-4711-9da2-20bb85afd490) May 10 21:47:40.725: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-4843.svc.cluster.local from pod dns-4843/dns-test-a9b0fd32-39d1-4711-9da2-20bb85afd490: the server could not find the requested resource (get pods dns-test-a9b0fd32-39d1-4711-9da2-20bb85afd490) May 10 21:47:40.727: INFO: Unable to read jessie_udp@dns-test-service-2.dns-4843.svc.cluster.local from pod dns-4843/dns-test-a9b0fd32-39d1-4711-9da2-20bb85afd490: the server could not find the requested resource (get pods dns-test-a9b0fd32-39d1-4711-9da2-20bb85afd490) May 10 21:47:40.730: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-4843.svc.cluster.local from pod dns-4843/dns-test-a9b0fd32-39d1-4711-9da2-20bb85afd490: the server could not find the requested resource (get pods dns-test-a9b0fd32-39d1-4711-9da2-20bb85afd490) May 10 21:47:40.736: INFO: Lookups using dns-4843/dns-test-a9b0fd32-39d1-4711-9da2-20bb85afd490 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-4843.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-4843.svc.cluster.local wheezy_udp@dns-test-service-2.dns-4843.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-4843.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-4843.svc.cluster.local 
jessie_tcp@dns-querier-2.dns-test-service-2.dns-4843.svc.cluster.local jessie_udp@dns-test-service-2.dns-4843.svc.cluster.local jessie_tcp@dns-test-service-2.dns-4843.svc.cluster.local] May 10 21:47:45.702: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-4843.svc.cluster.local from pod dns-4843/dns-test-a9b0fd32-39d1-4711-9da2-20bb85afd490: the server could not find the requested resource (get pods dns-test-a9b0fd32-39d1-4711-9da2-20bb85afd490) May 10 21:47:45.705: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-4843.svc.cluster.local from pod dns-4843/dns-test-a9b0fd32-39d1-4711-9da2-20bb85afd490: the server could not find the requested resource (get pods dns-test-a9b0fd32-39d1-4711-9da2-20bb85afd490) May 10 21:47:45.708: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-4843.svc.cluster.local from pod dns-4843/dns-test-a9b0fd32-39d1-4711-9da2-20bb85afd490: the server could not find the requested resource (get pods dns-test-a9b0fd32-39d1-4711-9da2-20bb85afd490) May 10 21:47:45.710: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-4843.svc.cluster.local from pod dns-4843/dns-test-a9b0fd32-39d1-4711-9da2-20bb85afd490: the server could not find the requested resource (get pods dns-test-a9b0fd32-39d1-4711-9da2-20bb85afd490) May 10 21:47:45.735: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-4843.svc.cluster.local from pod dns-4843/dns-test-a9b0fd32-39d1-4711-9da2-20bb85afd490: the server could not find the requested resource (get pods dns-test-a9b0fd32-39d1-4711-9da2-20bb85afd490) May 10 21:47:45.738: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-4843.svc.cluster.local from pod dns-4843/dns-test-a9b0fd32-39d1-4711-9da2-20bb85afd490: the server could not find the requested resource (get pods dns-test-a9b0fd32-39d1-4711-9da2-20bb85afd490) May 10 21:47:45.741: INFO: Unable to read jessie_udp@dns-test-service-2.dns-4843.svc.cluster.local from pod 
dns-4843/dns-test-a9b0fd32-39d1-4711-9da2-20bb85afd490: the server could not find the requested resource (get pods dns-test-a9b0fd32-39d1-4711-9da2-20bb85afd490) May 10 21:47:45.743: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-4843.svc.cluster.local from pod dns-4843/dns-test-a9b0fd32-39d1-4711-9da2-20bb85afd490: the server could not find the requested resource (get pods dns-test-a9b0fd32-39d1-4711-9da2-20bb85afd490) May 10 21:47:45.755: INFO: Lookups using dns-4843/dns-test-a9b0fd32-39d1-4711-9da2-20bb85afd490 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-4843.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-4843.svc.cluster.local wheezy_udp@dns-test-service-2.dns-4843.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-4843.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-4843.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-4843.svc.cluster.local jessie_udp@dns-test-service-2.dns-4843.svc.cluster.local jessie_tcp@dns-test-service-2.dns-4843.svc.cluster.local] May 10 21:47:50.703: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-4843.svc.cluster.local from pod dns-4843/dns-test-a9b0fd32-39d1-4711-9da2-20bb85afd490: the server could not find the requested resource (get pods dns-test-a9b0fd32-39d1-4711-9da2-20bb85afd490) May 10 21:47:50.707: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-4843.svc.cluster.local from pod dns-4843/dns-test-a9b0fd32-39d1-4711-9da2-20bb85afd490: the server could not find the requested resource (get pods dns-test-a9b0fd32-39d1-4711-9da2-20bb85afd490) May 10 21:47:50.710: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-4843.svc.cluster.local from pod dns-4843/dns-test-a9b0fd32-39d1-4711-9da2-20bb85afd490: the server could not find the requested resource (get pods dns-test-a9b0fd32-39d1-4711-9da2-20bb85afd490) May 10 21:47:50.713: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-4843.svc.cluster.local from pod 
dns-4843/dns-test-a9b0fd32-39d1-4711-9da2-20bb85afd490: the server could not find the requested resource (get pods dns-test-a9b0fd32-39d1-4711-9da2-20bb85afd490) May 10 21:47:50.721: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-4843.svc.cluster.local from pod dns-4843/dns-test-a9b0fd32-39d1-4711-9da2-20bb85afd490: the server could not find the requested resource (get pods dns-test-a9b0fd32-39d1-4711-9da2-20bb85afd490) May 10 21:47:50.724: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-4843.svc.cluster.local from pod dns-4843/dns-test-a9b0fd32-39d1-4711-9da2-20bb85afd490: the server could not find the requested resource (get pods dns-test-a9b0fd32-39d1-4711-9da2-20bb85afd490) May 10 21:47:50.726: INFO: Unable to read jessie_udp@dns-test-service-2.dns-4843.svc.cluster.local from pod dns-4843/dns-test-a9b0fd32-39d1-4711-9da2-20bb85afd490: the server could not find the requested resource (get pods dns-test-a9b0fd32-39d1-4711-9da2-20bb85afd490) May 10 21:47:50.729: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-4843.svc.cluster.local from pod dns-4843/dns-test-a9b0fd32-39d1-4711-9da2-20bb85afd490: the server could not find the requested resource (get pods dns-test-a9b0fd32-39d1-4711-9da2-20bb85afd490) May 10 21:47:50.735: INFO: Lookups using dns-4843/dns-test-a9b0fd32-39d1-4711-9da2-20bb85afd490 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-4843.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-4843.svc.cluster.local wheezy_udp@dns-test-service-2.dns-4843.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-4843.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-4843.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-4843.svc.cluster.local jessie_udp@dns-test-service-2.dns-4843.svc.cluster.local jessie_tcp@dns-test-service-2.dns-4843.svc.cluster.local] May 10 21:47:55.732: INFO: DNS probes using dns-4843/dns-test-a9b0fd32-39d1-4711-9da2-20bb85afd490 
succeeded
STEP: deleting the pod
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
May 10 21:47:56.399: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-4843" for this suite.
• [SLOW TEST:39.155 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
should provide DNS for pods for Subdomain [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] DNS should provide DNS for pods for Subdomain [Conformance]","total":278,"completed":130,"skipped":2196,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods should be submitted and removed [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
May 10 21:47:56.702: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177
[It] should be submitted and removed [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating the pod
STEP: setting up watch
STEP: submitting the pod to kubernetes
May 10 21:47:56.777: INFO: observed the pod list
STEP: verifying the pod is in kubernetes
STEP: verifying pod creation was observed
STEP: deleting the pod gracefully
STEP: verifying the kubelet observed the termination notice
STEP: verifying pod deletion was observed
[AfterEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
May 10 21:48:09.479: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-2812" for this suite.
• [SLOW TEST:12.784 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
should be submitted and removed [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Pods should be submitted and removed [NodeConformance] [Conformance]","total":278,"completed":131,"skipped":2225,"failed":0}
SSS
------------------------------
[sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] ReplicaSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
May 10 21:48:09.486: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replicaset
STEP: Waiting for a default service account to be provisioned in namespace
[It] should adopt matching pods on creation and release no longer matching pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Given a Pod with a 'name' label pod-adoption-release is created
STEP: When a replicaset with a matching selector is created
STEP: Then the orphan pod is adopted
STEP: When the matched label of one of its pods change
May 10 21:48:14.648: INFO: Pod name pod-adoption-release: Found 1 pods out of 1
STEP: Then the pod is released
[AfterEach] [sig-apps] ReplicaSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
May 10 21:48:15.661: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replicaset-703" for this suite.
• [SLOW TEST:6.202 seconds]
[sig-apps] ReplicaSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
should adopt matching pods on creation and release no longer matching pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance]","total":278,"completed":132,"skipped":2228,"failed":0}
SSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
May 10 21:48:15.688: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:64
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:79
STEP: Creating service test in namespace statefulset-2883
[It] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Initializing watcher for selector baz=blah,foo=bar STEP: Creating stateful set ss in namespace statefulset-2883 STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-2883 May 10 21:48:16.015: INFO: Found 0 stateful pods, waiting for 1 May 10 21:48:26.020: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: Confirming that stateful set scale up will halt with unhealthy stateful pod May 10 21:48:26.023: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2883 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' May 10 21:48:26.302: INFO: stderr: "I0510 21:48:26.162580 1822 log.go:172] (0xc000104a50) (0xc00068b9a0) Create stream\nI0510 21:48:26.162635 1822 log.go:172] (0xc000104a50) (0xc00068b9a0) Stream added, broadcasting: 1\nI0510 21:48:26.165069 1822 log.go:172] (0xc000104a50) Reply frame received for 1\nI0510 21:48:26.165292 1822 log.go:172] (0xc000104a50) (0xc000a30000) Create stream\nI0510 21:48:26.165324 1822 log.go:172] (0xc000104a50) (0xc000a30000) Stream added, broadcasting: 3\nI0510 21:48:26.166357 1822 log.go:172] (0xc000104a50) Reply frame received for 3\nI0510 21:48:26.166429 1822 log.go:172] (0xc000104a50) (0xc000a300a0) Create stream\nI0510 21:48:26.166451 1822 log.go:172] (0xc000104a50) (0xc000a300a0) Stream added, broadcasting: 5\nI0510 21:48:26.167597 1822 log.go:172] (0xc000104a50) Reply frame received for 5\nI0510 21:48:26.258502 1822 log.go:172] (0xc000104a50) Data frame received for 5\nI0510 21:48:26.258544 1822 log.go:172] (0xc000a300a0) (5) Data frame handling\nI0510 21:48:26.258574 1822 log.go:172] (0xc000a300a0) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0510 21:48:26.294531 1822 log.go:172] (0xc000104a50) Data frame received for 3\nI0510 21:48:26.294566 
1822 log.go:172] (0xc000a30000) (3) Data frame handling\nI0510 21:48:26.294595 1822 log.go:172] (0xc000a30000) (3) Data frame sent\nI0510 21:48:26.294970 1822 log.go:172] (0xc000104a50) Data frame received for 3\nI0510 21:48:26.295004 1822 log.go:172] (0xc000a30000) (3) Data frame handling\nI0510 21:48:26.295029 1822 log.go:172] (0xc000104a50) Data frame received for 5\nI0510 21:48:26.295050 1822 log.go:172] (0xc000a300a0) (5) Data frame handling\nI0510 21:48:26.296978 1822 log.go:172] (0xc000104a50) Data frame received for 1\nI0510 21:48:26.297015 1822 log.go:172] (0xc00068b9a0) (1) Data frame handling\nI0510 21:48:26.297043 1822 log.go:172] (0xc00068b9a0) (1) Data frame sent\nI0510 21:48:26.297066 1822 log.go:172] (0xc000104a50) (0xc00068b9a0) Stream removed, broadcasting: 1\nI0510 21:48:26.297096 1822 log.go:172] (0xc000104a50) Go away received\nI0510 21:48:26.297734 1822 log.go:172] (0xc000104a50) (0xc00068b9a0) Stream removed, broadcasting: 1\nI0510 21:48:26.297756 1822 log.go:172] (0xc000104a50) (0xc000a30000) Stream removed, broadcasting: 3\nI0510 21:48:26.297774 1822 log.go:172] (0xc000104a50) (0xc000a300a0) Stream removed, broadcasting: 5\n"
May 10 21:48:26.302: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
May 10 21:48:26.302: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'
May 10 21:48:26.306: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true
May 10 21:48:36.311: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
May 10 21:48:36.311: INFO: Waiting for statefulset status.replicas updated to 0
May 10 21:48:36.330: INFO: Verifying statefulset ss doesn't scale past 1 for another 9.999999616s
May 10 21:48:37.373: INFO: Verifying statefulset ss doesn't scale past 1 for another 8.990181764s
May 10 21:48:38.383: INFO: Verifying statefulset ss doesn't scale past 1 for another 7.947529128s
May 10 21:48:39.387: INFO: Verifying statefulset ss doesn't scale past 1 for another 6.937764118s
May 10 21:48:40.392: INFO: Verifying statefulset ss doesn't scale past 1 for another 5.933085741s
May 10 21:48:41.396: INFO: Verifying statefulset ss doesn't scale past 1 for another 4.928466302s
May 10 21:48:42.401: INFO: Verifying statefulset ss doesn't scale past 1 for another 3.924070243s
May 10 21:48:43.405: INFO: Verifying statefulset ss doesn't scale past 1 for another 2.919461309s
May 10 21:48:44.410: INFO: Verifying statefulset ss doesn't scale past 1 for another 1.915197959s
May 10 21:48:45.415: INFO: Verifying statefulset ss doesn't scale past 1 for another 910.59314ms
STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-2883
May 10 21:48:46.419: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2883 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
May 10 21:48:46.655: INFO: stderr: "I0510 21:48:46.553385 1841 log.go:172] (0xc000a6a840) (0xc000a8e3c0) Create stream\nI0510 21:48:46.553440 1841 log.go:172] (0xc000a6a840) (0xc000a8e3c0) Stream added, broadcasting: 1\nI0510 21:48:46.557856 1841 log.go:172] (0xc000a6a840) Reply frame received for 1\nI0510 21:48:46.557901 1841 log.go:172] (0xc000a6a840) (0xc0005f6640) Create stream\nI0510 21:48:46.557913 1841 log.go:172] (0xc000a6a840) (0xc0005f6640) Stream added, broadcasting: 3\nI0510 21:48:46.558959 1841 log.go:172] (0xc000a6a840) Reply frame received for 3\nI0510 21:48:46.558990 1841 log.go:172] (0xc000a6a840) (0xc0001bf400) Create stream\nI0510 21:48:46.559003 1841 log.go:172] (0xc000a6a840) (0xc0001bf400) Stream added, broadcasting: 5\nI0510 21:48:46.559925 1841 log.go:172] (0xc000a6a840) Reply frame received for 5\nI0510 21:48:46.646272 1841 log.go:172] (0xc000a6a840) Data frame received for 3\nI0510 21:48:46.646319 1841 log.go:172] 
(0xc0005f6640) (3) Data frame handling\nI0510 21:48:46.646367 1841 log.go:172] (0xc0005f6640) (3) Data frame sent\nI0510 21:48:46.646411 1841 log.go:172] (0xc000a6a840) Data frame received for 3\nI0510 21:48:46.646435 1841 log.go:172] (0xc0005f6640) (3) Data frame handling\nI0510 21:48:46.646454 1841 log.go:172] (0xc000a6a840) Data frame received for 5\nI0510 21:48:46.646469 1841 log.go:172] (0xc0001bf400) (5) Data frame handling\nI0510 21:48:46.646501 1841 log.go:172] (0xc0001bf400) (5) Data frame sent\nI0510 21:48:46.646521 1841 log.go:172] (0xc000a6a840) Data frame received for 5\nI0510 21:48:46.646532 1841 log.go:172] (0xc0001bf400) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0510 21:48:46.648560 1841 log.go:172] (0xc000a6a840) Data frame received for 1\nI0510 21:48:46.648675 1841 log.go:172] (0xc000a8e3c0) (1) Data frame handling\nI0510 21:48:46.648703 1841 log.go:172] (0xc000a8e3c0) (1) Data frame sent\nI0510 21:48:46.648725 1841 log.go:172] (0xc000a6a840) (0xc000a8e3c0) Stream removed, broadcasting: 1\nI0510 21:48:46.648752 1841 log.go:172] (0xc000a6a840) Go away received\nI0510 21:48:46.649469 1841 log.go:172] (0xc000a6a840) (0xc000a8e3c0) Stream removed, broadcasting: 1\nI0510 21:48:46.649495 1841 log.go:172] (0xc000a6a840) (0xc0005f6640) Stream removed, broadcasting: 3\nI0510 21:48:46.649508 1841 log.go:172] (0xc000a6a840) (0xc0001bf400) Stream removed, broadcasting: 5\n" May 10 21:48:46.655: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" May 10 21:48:46.655: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' May 10 21:48:46.659: INFO: Found 1 stateful pods, waiting for 3 May 10 21:48:56.663: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true May 10 21:48:56.663: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true May 10 
21:48:56.663: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Verifying that stateful set ss was scaled up in order STEP: Scale down will halt with unhealthy stateful pod May 10 21:48:56.670: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2883 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' May 10 21:48:56.905: INFO: stderr: "I0510 21:48:56.806729 1861 log.go:172] (0xc0009e40b0) (0xc0009aa000) Create stream\nI0510 21:48:56.806816 1861 log.go:172] (0xc0009e40b0) (0xc0009aa000) Stream added, broadcasting: 1\nI0510 21:48:56.809704 1861 log.go:172] (0xc0009e40b0) Reply frame received for 1\nI0510 21:48:56.809986 1861 log.go:172] (0xc0009e40b0) (0xc000a0a000) Create stream\nI0510 21:48:56.810020 1861 log.go:172] (0xc0009e40b0) (0xc000a0a000) Stream added, broadcasting: 3\nI0510 21:48:56.811357 1861 log.go:172] (0xc0009e40b0) Reply frame received for 3\nI0510 21:48:56.811403 1861 log.go:172] (0xc0009e40b0) (0xc0009ce000) Create stream\nI0510 21:48:56.811436 1861 log.go:172] (0xc0009e40b0) (0xc0009ce000) Stream added, broadcasting: 5\nI0510 21:48:56.812820 1861 log.go:172] (0xc0009e40b0) Reply frame received for 5\nI0510 21:48:56.898052 1861 log.go:172] (0xc0009e40b0) Data frame received for 5\nI0510 21:48:56.898105 1861 log.go:172] (0xc0009ce000) (5) Data frame handling\nI0510 21:48:56.898128 1861 log.go:172] (0xc0009ce000) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0510 21:48:56.898198 1861 log.go:172] (0xc0009e40b0) Data frame received for 3\nI0510 21:48:56.898240 1861 log.go:172] (0xc000a0a000) (3) Data frame handling\nI0510 21:48:56.898267 1861 log.go:172] (0xc000a0a000) (3) Data frame sent\nI0510 21:48:56.898289 1861 log.go:172] (0xc0009e40b0) Data frame received for 3\nI0510 21:48:56.898311 1861 log.go:172] (0xc000a0a000) (3) Data frame handling\nI0510 21:48:56.898405 1861 log.go:172] (0xc0009e40b0) Data 
frame received for 5\nI0510 21:48:56.898431 1861 log.go:172] (0xc0009ce000) (5) Data frame handling\nI0510 21:48:56.900136 1861 log.go:172] (0xc0009e40b0) Data frame received for 1\nI0510 21:48:56.900175 1861 log.go:172] (0xc0009aa000) (1) Data frame handling\nI0510 21:48:56.900209 1861 log.go:172] (0xc0009aa000) (1) Data frame sent\nI0510 21:48:56.900266 1861 log.go:172] (0xc0009e40b0) (0xc0009aa000) Stream removed, broadcasting: 1\nI0510 21:48:56.900303 1861 log.go:172] (0xc0009e40b0) Go away received\nI0510 21:48:56.900739 1861 log.go:172] (0xc0009e40b0) (0xc0009aa000) Stream removed, broadcasting: 1\nI0510 21:48:56.900768 1861 log.go:172] (0xc0009e40b0) (0xc000a0a000) Stream removed, broadcasting: 3\nI0510 21:48:56.900781 1861 log.go:172] (0xc0009e40b0) (0xc0009ce000) Stream removed, broadcasting: 5\n" May 10 21:48:56.905: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" May 10 21:48:56.905: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' May 10 21:48:56.905: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2883 ss-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' May 10 21:48:57.151: INFO: stderr: "I0510 21:48:57.047878 1881 log.go:172] (0xc000b42a50) (0xc0004f4000) Create stream\nI0510 21:48:57.047935 1881 log.go:172] (0xc000b42a50) (0xc0004f4000) Stream added, broadcasting: 1\nI0510 21:48:57.050402 1881 log.go:172] (0xc000b42a50) Reply frame received for 1\nI0510 21:48:57.050450 1881 log.go:172] (0xc000b42a50) (0xc0009f8000) Create stream\nI0510 21:48:57.050465 1881 log.go:172] (0xc000b42a50) (0xc0009f8000) Stream added, broadcasting: 3\nI0510 21:48:57.051712 1881 log.go:172] (0xc000b42a50) Reply frame received for 3\nI0510 21:48:57.051773 1881 log.go:172] (0xc000b42a50) (0xc0004f4140) Create stream\nI0510 21:48:57.051792 1881 log.go:172] (0xc000b42a50) 
(0xc0004f4140) Stream added, broadcasting: 5\nI0510 21:48:57.052837 1881 log.go:172] (0xc000b42a50) Reply frame received for 5\nI0510 21:48:57.108491 1881 log.go:172] (0xc000b42a50) Data frame received for 5\nI0510 21:48:57.108515 1881 log.go:172] (0xc0004f4140) (5) Data frame handling\nI0510 21:48:57.108526 1881 log.go:172] (0xc0004f4140) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0510 21:48:57.141659 1881 log.go:172] (0xc000b42a50) Data frame received for 3\nI0510 21:48:57.141698 1881 log.go:172] (0xc0009f8000) (3) Data frame handling\nI0510 21:48:57.141733 1881 log.go:172] (0xc0009f8000) (3) Data frame sent\nI0510 21:48:57.141753 1881 log.go:172] (0xc000b42a50) Data frame received for 3\nI0510 21:48:57.141774 1881 log.go:172] (0xc0009f8000) (3) Data frame handling\nI0510 21:48:57.142114 1881 log.go:172] (0xc000b42a50) Data frame received for 5\nI0510 21:48:57.142135 1881 log.go:172] (0xc0004f4140) (5) Data frame handling\nI0510 21:48:57.144243 1881 log.go:172] (0xc000b42a50) Data frame received for 1\nI0510 21:48:57.144360 1881 log.go:172] (0xc0004f4000) (1) Data frame handling\nI0510 21:48:57.144464 1881 log.go:172] (0xc0004f4000) (1) Data frame sent\nI0510 21:48:57.144499 1881 log.go:172] (0xc000b42a50) (0xc0004f4000) Stream removed, broadcasting: 1\nI0510 21:48:57.144820 1881 log.go:172] (0xc000b42a50) Go away received\nI0510 21:48:57.144991 1881 log.go:172] (0xc000b42a50) (0xc0004f4000) Stream removed, broadcasting: 1\nI0510 21:48:57.145031 1881 log.go:172] (0xc000b42a50) (0xc0009f8000) Stream removed, broadcasting: 3\nI0510 21:48:57.145071 1881 log.go:172] (0xc000b42a50) (0xc0004f4140) Stream removed, broadcasting: 5\n" May 10 21:48:57.151: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" May 10 21:48:57.151: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' May 10 21:48:57.151: INFO: Running 
'/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2883 ss-2 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' May 10 21:48:57.557: INFO: stderr: "I0510 21:48:57.332076 1902 log.go:172] (0xc000a14000) (0xc0005b06e0) Create stream\nI0510 21:48:57.332156 1902 log.go:172] (0xc000a14000) (0xc0005b06e0) Stream added, broadcasting: 1\nI0510 21:48:57.335196 1902 log.go:172] (0xc000a14000) Reply frame received for 1\nI0510 21:48:57.335261 1902 log.go:172] (0xc000a14000) (0xc0007994a0) Create stream\nI0510 21:48:57.335279 1902 log.go:172] (0xc000a14000) (0xc0007994a0) Stream added, broadcasting: 3\nI0510 21:48:57.336237 1902 log.go:172] (0xc000a14000) Reply frame received for 3\nI0510 21:48:57.336282 1902 log.go:172] (0xc000a14000) (0xc0009fe000) Create stream\nI0510 21:48:57.336302 1902 log.go:172] (0xc000a14000) (0xc0009fe000) Stream added, broadcasting: 5\nI0510 21:48:57.337649 1902 log.go:172] (0xc000a14000) Reply frame received for 5\nI0510 21:48:57.509288 1902 log.go:172] (0xc000a14000) Data frame received for 5\nI0510 21:48:57.509325 1902 log.go:172] (0xc0009fe000) (5) Data frame handling\nI0510 21:48:57.509347 1902 log.go:172] (0xc0009fe000) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0510 21:48:57.548625 1902 log.go:172] (0xc000a14000) Data frame received for 3\nI0510 21:48:57.548652 1902 log.go:172] (0xc0007994a0) (3) Data frame handling\nI0510 21:48:57.548671 1902 log.go:172] (0xc0007994a0) (3) Data frame sent\nI0510 21:48:57.548950 1902 log.go:172] (0xc000a14000) Data frame received for 5\nI0510 21:48:57.549030 1902 log.go:172] (0xc0009fe000) (5) Data frame handling\nI0510 21:48:57.549063 1902 log.go:172] (0xc000a14000) Data frame received for 3\nI0510 21:48:57.549074 1902 log.go:172] (0xc0007994a0) (3) Data frame handling\nI0510 21:48:57.550498 1902 log.go:172] (0xc000a14000) Data frame received for 1\nI0510 21:48:57.550521 1902 log.go:172] (0xc0005b06e0) (1) Data frame 
handling\nI0510 21:48:57.550550 1902 log.go:172] (0xc0005b06e0) (1) Data frame sent\nI0510 21:48:57.550629 1902 log.go:172] (0xc000a14000) (0xc0005b06e0) Stream removed, broadcasting: 1\nI0510 21:48:57.550666 1902 log.go:172] (0xc000a14000) Go away received\nI0510 21:48:57.551099 1902 log.go:172] (0xc000a14000) (0xc0005b06e0) Stream removed, broadcasting: 1\nI0510 21:48:57.551120 1902 log.go:172] (0xc000a14000) (0xc0007994a0) Stream removed, broadcasting: 3\nI0510 21:48:57.551132 1902 log.go:172] (0xc000a14000) (0xc0009fe000) Stream removed, broadcasting: 5\n" May 10 21:48:57.558: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" May 10 21:48:57.558: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-2: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' May 10 21:48:57.558: INFO: Waiting for statefulset status.replicas updated to 0 May 10 21:48:57.561: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 2 May 10 21:49:07.568: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false May 10 21:49:07.568: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false May 10 21:49:07.568: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false May 10 21:49:07.582: INFO: Verifying statefulset ss doesn't scale past 3 for another 9.999999489s May 10 21:49:08.591: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.99156488s May 10 21:49:09.602: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.982209859s May 10 21:49:10.606: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.972279187s May 10 21:49:11.610: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.968435155s May 10 21:49:12.615: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.963980486s May 10 21:49:13.619: INFO: Verifying statefulset ss doesn't scale 
past 3 for another 3.959522709s May 10 21:49:14.624: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.954794388s May 10 21:49:15.719: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.950102004s May 10 21:49:16.723: INFO: Verifying statefulset ss doesn't scale past 3 for another 854.940205ms STEP: Scaling down stateful set ss to 0 replicas and waiting until none of pods will run in namespace statefulset-2883 May 10 21:49:17.727: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2883 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 10 21:49:17.955: INFO: stderr: "I0510 21:49:17.878704 1923 log.go:172] (0xc0000f73f0) (0xc0006dba40) Create stream\nI0510 21:49:17.878766 1923 log.go:172] (0xc0000f73f0) (0xc0006dba40) Stream added, broadcasting: 1\nI0510 21:49:17.882451 1923 log.go:172] (0xc0000f73f0) Reply frame received for 1\nI0510 21:49:17.882494 1923 log.go:172] (0xc0000f73f0) (0xc0009b8000) Create stream\nI0510 21:49:17.882515 1923 log.go:172] (0xc0000f73f0) (0xc0009b8000) Stream added, broadcasting: 3\nI0510 21:49:17.883422 1923 log.go:172] (0xc0000f73f0) Reply frame received for 3\nI0510 21:49:17.883465 1923 log.go:172] (0xc0000f73f0) (0xc000b00000) Create stream\nI0510 21:49:17.883478 1923 log.go:172] (0xc0000f73f0) (0xc000b00000) Stream added, broadcasting: 5\nI0510 21:49:17.884417 1923 log.go:172] (0xc0000f73f0) Reply frame received for 5\nI0510 21:49:17.947346 1923 log.go:172] (0xc0000f73f0) Data frame received for 3\nI0510 21:49:17.947387 1923 log.go:172] (0xc0009b8000) (3) Data frame handling\nI0510 21:49:17.947412 1923 log.go:172] (0xc0009b8000) (3) Data frame sent\nI0510 21:49:17.947432 1923 log.go:172] (0xc0000f73f0) Data frame received for 3\nI0510 21:49:17.947451 1923 log.go:172] (0xc0009b8000) (3) Data frame handling\nI0510 21:49:17.947963 1923 log.go:172] (0xc0000f73f0) Data frame received for 5\nI0510 21:49:17.947990 1923 log.go:172] 
(0xc000b00000) (5) Data frame handling\nI0510 21:49:17.948009 1923 log.go:172] (0xc000b00000) (5) Data frame sent\nI0510 21:49:17.948024 1923 log.go:172] (0xc0000f73f0) Data frame received for 5\nI0510 21:49:17.948050 1923 log.go:172] (0xc000b00000) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0510 21:49:17.949906 1923 log.go:172] (0xc0000f73f0) Data frame received for 1\nI0510 21:49:17.949930 1923 log.go:172] (0xc0006dba40) (1) Data frame handling\nI0510 21:49:17.949956 1923 log.go:172] (0xc0006dba40) (1) Data frame sent\nI0510 21:49:17.949974 1923 log.go:172] (0xc0000f73f0) (0xc0006dba40) Stream removed, broadcasting: 1\nI0510 21:49:17.949998 1923 log.go:172] (0xc0000f73f0) Go away received\nI0510 21:49:17.950386 1923 log.go:172] (0xc0000f73f0) (0xc0006dba40) Stream removed, broadcasting: 1\nI0510 21:49:17.950408 1923 log.go:172] (0xc0000f73f0) (0xc0009b8000) Stream removed, broadcasting: 3\nI0510 21:49:17.950423 1923 log.go:172] (0xc0000f73f0) (0xc000b00000) Stream removed, broadcasting: 5\n" May 10 21:49:17.955: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" May 10 21:49:17.955: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' May 10 21:49:17.955: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2883 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 10 21:49:18.162: INFO: stderr: "I0510 21:49:18.087864 1942 log.go:172] (0xc000105760) (0xc00096a000) Create stream\nI0510 21:49:18.087957 1942 log.go:172] (0xc000105760) (0xc00096a000) Stream added, broadcasting: 1\nI0510 21:49:18.091111 1942 log.go:172] (0xc000105760) Reply frame received for 1\nI0510 21:49:18.091167 1942 log.go:172] (0xc000105760) (0xc00096a0a0) Create stream\nI0510 21:49:18.091195 1942 log.go:172] (0xc000105760) (0xc00096a0a0) Stream added, broadcasting: 
3\nI0510 21:49:18.092036 1942 log.go:172] (0xc000105760) Reply frame received for 3\nI0510 21:49:18.092073 1942 log.go:172] (0xc000105760) (0xc00096a140) Create stream\nI0510 21:49:18.092089 1942 log.go:172] (0xc000105760) (0xc00096a140) Stream added, broadcasting: 5\nI0510 21:49:18.092958 1942 log.go:172] (0xc000105760) Reply frame received for 5\nI0510 21:49:18.156060 1942 log.go:172] (0xc000105760) Data frame received for 5\nI0510 21:49:18.156085 1942 log.go:172] (0xc00096a140) (5) Data frame handling\nI0510 21:49:18.156093 1942 log.go:172] (0xc00096a140) (5) Data frame sent\nI0510 21:49:18.156098 1942 log.go:172] (0xc000105760) Data frame received for 5\nI0510 21:49:18.156103 1942 log.go:172] (0xc00096a140) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0510 21:49:18.156131 1942 log.go:172] (0xc000105760) Data frame received for 3\nI0510 21:49:18.156140 1942 log.go:172] (0xc00096a0a0) (3) Data frame handling\nI0510 21:49:18.156149 1942 log.go:172] (0xc00096a0a0) (3) Data frame sent\nI0510 21:49:18.156168 1942 log.go:172] (0xc000105760) Data frame received for 3\nI0510 21:49:18.156185 1942 log.go:172] (0xc00096a0a0) (3) Data frame handling\nI0510 21:49:18.157788 1942 log.go:172] (0xc000105760) Data frame received for 1\nI0510 21:49:18.157802 1942 log.go:172] (0xc00096a000) (1) Data frame handling\nI0510 21:49:18.157814 1942 log.go:172] (0xc00096a000) (1) Data frame sent\nI0510 21:49:18.157945 1942 log.go:172] (0xc000105760) (0xc00096a000) Stream removed, broadcasting: 1\nI0510 21:49:18.157987 1942 log.go:172] (0xc000105760) Go away received\nI0510 21:49:18.158336 1942 log.go:172] (0xc000105760) (0xc00096a000) Stream removed, broadcasting: 1\nI0510 21:49:18.158357 1942 log.go:172] (0xc000105760) (0xc00096a0a0) Stream removed, broadcasting: 3\nI0510 21:49:18.158368 1942 log.go:172] (0xc000105760) (0xc00096a140) Stream removed, broadcasting: 5\n" May 10 21:49:18.163: INFO: stdout: "'/tmp/index.html' -> 
'/usr/local/apache2/htdocs/index.html'\n" May 10 21:49:18.163: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' May 10 21:49:18.163: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2883 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 10 21:49:18.470: INFO: stderr: "I0510 21:49:18.396453 1966 log.go:172] (0xc0000f6e70) (0xc000852000) Create stream\nI0510 21:49:18.396502 1966 log.go:172] (0xc0000f6e70) (0xc000852000) Stream added, broadcasting: 1\nI0510 21:49:18.398818 1966 log.go:172] (0xc0000f6e70) Reply frame received for 1\nI0510 21:49:18.398853 1966 log.go:172] (0xc0000f6e70) (0xc000970000) Create stream\nI0510 21:49:18.398866 1966 log.go:172] (0xc0000f6e70) (0xc000970000) Stream added, broadcasting: 3\nI0510 21:49:18.399699 1966 log.go:172] (0xc0000f6e70) Reply frame received for 3\nI0510 21:49:18.399735 1966 log.go:172] (0xc0000f6e70) (0xc000852140) Create stream\nI0510 21:49:18.399751 1966 log.go:172] (0xc0000f6e70) (0xc000852140) Stream added, broadcasting: 5\nI0510 21:49:18.400499 1966 log.go:172] (0xc0000f6e70) Reply frame received for 5\nI0510 21:49:18.464526 1966 log.go:172] (0xc0000f6e70) Data frame received for 5\nI0510 21:49:18.464566 1966 log.go:172] (0xc000852140) (5) Data frame handling\nI0510 21:49:18.464578 1966 log.go:172] (0xc000852140) (5) Data frame sent\nI0510 21:49:18.464588 1966 log.go:172] (0xc0000f6e70) Data frame received for 5\nI0510 21:49:18.464602 1966 log.go:172] (0xc000852140) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0510 21:49:18.464631 1966 log.go:172] (0xc0000f6e70) Data frame received for 3\nI0510 21:49:18.464644 1966 log.go:172] (0xc000970000) (3) Data frame handling\nI0510 21:49:18.464661 1966 log.go:172] (0xc000970000) (3) Data frame sent\nI0510 21:49:18.464669 1966 log.go:172] (0xc0000f6e70) Data frame received 
for 3\nI0510 21:49:18.464677 1966 log.go:172] (0xc000970000) (3) Data frame handling\nI0510 21:49:18.465922 1966 log.go:172] (0xc0000f6e70) Data frame received for 1\nI0510 21:49:18.465958 1966 log.go:172] (0xc000852000) (1) Data frame handling\nI0510 21:49:18.465975 1966 log.go:172] (0xc000852000) (1) Data frame sent\nI0510 21:49:18.466002 1966 log.go:172] (0xc0000f6e70) (0xc000852000) Stream removed, broadcasting: 1\nI0510 21:49:18.466133 1966 log.go:172] (0xc0000f6e70) Go away received\nI0510 21:49:18.466238 1966 log.go:172] (0xc0000f6e70) (0xc000852000) Stream removed, broadcasting: 1\nI0510 21:49:18.466252 1966 log.go:172] (0xc0000f6e70) (0xc000970000) Stream removed, broadcasting: 3\nI0510 21:49:18.466260 1966 log.go:172] (0xc0000f6e70) (0xc000852140) Stream removed, broadcasting: 5\n" May 10 21:49:18.470: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" May 10 21:49:18.470: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-2: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' May 10 21:49:18.470: INFO: Scaling statefulset ss to 0 STEP: Verifying that stateful set ss was scaled down in reverse order [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90 May 10 21:49:58.535: INFO: Deleting all statefulset in ns statefulset-2883 May 10 21:49:58.539: INFO: Scaling statefulset ss to 0 May 10 21:49:58.595: INFO: Waiting for statefulset status.replicas updated to 0 May 10 21:49:58.598: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 10 21:49:58.877: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-2883" for this suite. 
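The kubectl exec steps logged above all use the same idiom to toggle a pod in and out of readiness: the test moves `index.html` out of the httpd web root so the readiness probe starts failing, and moves it back to restore readiness, with `|| true` so a second `mv` of an already-moved file does not fail the exec step. The sketch below reproduces that idiom locally with temporary directories standing in for `/usr/local/apache2/htdocs` and `/tmp` inside the pod; the directory names are illustrative, not from the test itself.

```shell
# Local demonstration of the readiness-toggle idiom from the log above.
# Temp dirs stand in for the pod's web root and /tmp.
set -u
webroot=$(mktemp -d)   # stand-in for /usr/local/apache2/htdocs
stash=$(mktemp -d)     # stand-in for /tmp
echo ok > "$webroot/index.html"

# Take the pod "unready": move the index file away; never fail the step.
mv -v "$webroot/index.html" "$stash/" || true
test ! -e "$webroot/index.html" && echo "unready"

# A second invocation would fail (the file is gone), but `|| true` absorbs it,
# so kubectl exec still reports success.
mv -v "$webroot/index.html" "$stash/" 2>/dev/null || true
echo "exit status: $?"   # prints "exit status: 0"

# Restore readiness by moving the file back.
mv -v "$stash/index.html" "$webroot/" || true
test -e "$webroot/index.html" && echo "ready"
```

This is why the log shows the same `mv -v … || true` command succeeding in both directions: the `|| true` makes the toggle idempotent from kubectl's point of view, while the readiness probe (which checks for the file) is what actually flips the pod's Ready condition.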
• [SLOW TEST:103.208 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]","total":278,"completed":133,"skipped":2245,"failed":0} S ------------------------------ [sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 10 21:49:58.896: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:133 [It] should rollback without unnecessary restarts [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 10 21:49:59.069: INFO: Create a RollingUpdate DaemonSet May 10 21:49:59.072: INFO: Check that daemon pods launch on every node of the cluster May 10 21:49:59.074: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule 
TimeAdded:}], skip checking this node May 10 21:49:59.079: INFO: Number of nodes with available pods: 0 May 10 21:49:59.079: INFO: Node jerma-worker is running more than one daemon pod May 10 21:50:00.085: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 10 21:50:00.089: INFO: Number of nodes with available pods: 0 May 10 21:50:00.089: INFO: Node jerma-worker is running more than one daemon pod May 10 21:50:01.085: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 10 21:50:01.089: INFO: Number of nodes with available pods: 0 May 10 21:50:01.089: INFO: Node jerma-worker is running more than one daemon pod May 10 21:50:02.084: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 10 21:50:02.090: INFO: Number of nodes with available pods: 0 May 10 21:50:02.090: INFO: Node jerma-worker is running more than one daemon pod May 10 21:50:03.313: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 10 21:50:03.317: INFO: Number of nodes with available pods: 0 May 10 21:50:03.317: INFO: Node jerma-worker is running more than one daemon pod May 10 21:50:04.121: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 10 21:50:04.124: INFO: Number of nodes with available pods: 1 May 10 21:50:04.124: INFO: Node jerma-worker2 is running more than one daemon pod May 10 21:50:05.194: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints 
[{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 10 21:50:05.204: INFO: Number of nodes with available pods: 2 May 10 21:50:05.204: INFO: Number of running nodes: 2, number of available pods: 2 May 10 21:50:05.204: INFO: Update the DaemonSet to trigger a rollout May 10 21:50:05.215: INFO: Updating DaemonSet daemon-set May 10 21:50:20.233: INFO: Roll back the DaemonSet before rollout is complete May 10 21:50:20.236: INFO: Updating DaemonSet daemon-set May 10 21:50:20.236: INFO: Make sure DaemonSet rollback is complete May 10 21:50:20.254: INFO: Wrong image for pod: daemon-set-c5gvh. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. May 10 21:50:20.254: INFO: Pod daemon-set-c5gvh is not available May 10 21:50:20.280: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 10 21:50:21.284: INFO: Wrong image for pod: daemon-set-c5gvh. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. May 10 21:50:21.284: INFO: Pod daemon-set-c5gvh is not available May 10 21:50:21.289: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 10 21:50:22.284: INFO: Wrong image for pod: daemon-set-c5gvh. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. 
May 10 21:50:22.284: INFO: Pod daemon-set-c5gvh is not available May 10 21:50:22.287: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 10 21:50:23.285: INFO: Pod daemon-set-wvdhj is not available May 10 21:50:23.290: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:99 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-6978, will wait for the garbage collector to delete the pods May 10 21:50:23.354: INFO: Deleting DaemonSet.extensions daemon-set took: 5.752852ms May 10 21:50:23.454: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.30867ms May 10 21:50:29.357: INFO: Number of nodes with available pods: 0 May 10 21:50:29.357: INFO: Number of running nodes: 0, number of available pods: 0 May 10 21:50:29.359: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-6978/daemonsets","resourceVersion":"15070360"},"items":null} May 10 21:50:29.361: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-6978/pods","resourceVersion":"15070360"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 10 21:50:29.369: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-6978" for this suite. 
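The "Wrong image for pod" lines above come from the rollback check: after the DaemonSet is updated to a bad image (`foo:non-existent`) and then rolled back mid-rollout, every daemon pod must end up on the original image (`docker.io/library/httpd:2.4.38-alpine`) without healthy pods being restarted. The loop below is a minimal local sketch of that per-pod image comparison; the pod names and images are taken from the log, but the loop itself is an illustrative stand-in for the e2e framework's check, not its actual code.

```shell
# Sketch of the post-rollback verification: report any pod whose image
# does not match the expected (pre-update) image.
expected="docker.io/library/httpd:2.4.38-alpine"

check_images() {
  # Reads lines of "<pod> <image>" on stdin; prints a message per mismatch.
  while read -r pod image; do
    if [ "$image" != "$expected" ]; then
      echo "Wrong image for pod: $pod. Expected: $expected, got: $image."
    fi
  done
}

# Pod names/images below mirror the log: one pod still on the bad image,
# one already rolled back.
printf '%s\n' \
  "daemon-set-c5gvh foo:non-existent" \
  "daemon-set-wvdhj docker.io/library/httpd:2.4.38-alpine" | check_images
```

The test keeps polling this check (alongside the pod-availability count) until no mismatches remain, which is why the same "Wrong image" line repeats for `daemon-set-c5gvh` until its replacement pod comes up.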
• [SLOW TEST:30.479 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
should rollback without unnecessary restarts [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]","total":278,"completed":134,"skipped":2246,"failed":0}
S
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
May 10 21:50:29.375: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: create the container
STEP: wait for the container to reach Succeeded
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
May 10 21:50:33.484: INFO: Expected: &{OK} to match Container's Termination Message: OK --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
May 10 21:50:33.522: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-6649" for this suite.
•{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":278,"completed":135,"skipped":2247,"failed":0}
SSSSSSSS
------------------------------
[k8s.io] Security Context When creating a pod with readOnlyRootFilesystem should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Security Context
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
May 10 21:50:33.529: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Security Context
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:39
[It] should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
May 10 21:50:33.632: INFO: Waiting up to 5m0s for pod "busybox-readonly-false-cb426df0-678c-4d81-9e2c-0bb1a8161d64" in namespace "security-context-test-798" to be "success or failure"
May 10 21:50:33.641: INFO: Pod "busybox-readonly-false-cb426df0-678c-4d81-9e2c-0bb1a8161d64": Phase="Pending", Reason="", readiness=false. Elapsed: 8.86022ms
May 10 21:50:35.645: INFO: Pod "busybox-readonly-false-cb426df0-678c-4d81-9e2c-0bb1a8161d64": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013039029s
May 10 21:50:37.653: INFO: Pod "busybox-readonly-false-cb426df0-678c-4d81-9e2c-0bb1a8161d64": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.021287885s
May 10 21:50:37.653: INFO: Pod "busybox-readonly-false-cb426df0-678c-4d81-9e2c-0bb1a8161d64" satisfied condition "success or failure"
[AfterEach] [k8s.io] Security Context
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
May 10 21:50:37.654: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-798" for this suite.
•{"msg":"PASSED [k8s.io] Security Context When creating a pod with readOnlyRootFilesystem should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]","total":278,"completed":136,"skipped":2255,"failed":0}
SSSSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
May 10 21:50:37.661: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should observe add, update, and delete watch notifications on configmaps [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating a watch on configmaps with label A
STEP: creating a watch on configmaps with label B
STEP: creating a watch on configmaps with label A or B
STEP: creating a configmap with label A and ensuring the correct watchers observe the notification
May 10 21:50:37.833: INFO: Got : ADDED
&ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-2273 /api/v1/namespaces/watch-2273/configmaps/e2e-watch-test-configmap-a d4999f5f-1dac-4e70-9912-239127966aa9 15070446 0 2020-05-10 21:50:37 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},}
May 10 21:50:37.833: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-2273 /api/v1/namespaces/watch-2273/configmaps/e2e-watch-test-configmap-a d4999f5f-1dac-4e70-9912-239127966aa9 15070446 0 2020-05-10 21:50:37 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},}
STEP: modifying configmap A and ensuring the correct watchers observe the notification
May 10 21:50:47.841: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-2273 /api/v1/namespaces/watch-2273/configmaps/e2e-watch-test-configmap-a d4999f5f-1dac-4e70-9912-239127966aa9 15070495 0 2020-05-10 21:50:37 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
May 10 21:50:47.842: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-2273 /api/v1/namespaces/watch-2273/configmaps/e2e-watch-test-configmap-a d4999f5f-1dac-4e70-9912-239127966aa9 15070495 0 2020-05-10 21:50:37 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
STEP: modifying configmap A again and ensuring the correct watchers observe the notification
May 10 21:50:57.849: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-2273 /api/v1/namespaces/watch-2273/configmaps/e2e-watch-test-configmap-a d4999f5f-1dac-4e70-9912-239127966aa9 15070525 0 2020-05-10 21:50:37 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
May 10 21:50:57.850: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-2273 /api/v1/namespaces/watch-2273/configmaps/e2e-watch-test-configmap-a d4999f5f-1dac-4e70-9912-239127966aa9 15070525 0 2020-05-10 21:50:37 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
STEP: deleting configmap A and ensuring the correct watchers observe the notification
May 10 21:51:07.857: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-2273 /api/v1/namespaces/watch-2273/configmaps/e2e-watch-test-configmap-a d4999f5f-1dac-4e70-9912-239127966aa9 15070555 0 2020-05-10 21:50:37 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
May 10 21:51:07.858: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-2273 /api/v1/namespaces/watch-2273/configmaps/e2e-watch-test-configmap-a d4999f5f-1dac-4e70-9912-239127966aa9 15070555 0 2020-05-10 21:50:37 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
STEP: creating a configmap with label B and ensuring the correct watchers observe the notification
May 10 21:51:17.865: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-2273 /api/v1/namespaces/watch-2273/configmaps/e2e-watch-test-configmap-b 48328a81-9a9a-4cb1-9d96-7244acf474c3 15070585 0 2020-05-10 21:51:17 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},}
May 10 21:51:17.865: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-2273 /api/v1/namespaces/watch-2273/configmaps/e2e-watch-test-configmap-b 48328a81-9a9a-4cb1-9d96-7244acf474c3 15070585 0 2020-05-10 21:51:17 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},}
STEP: deleting configmap B and ensuring the correct watchers observe the notification
May 10 21:51:27.872: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-2273 /api/v1/namespaces/watch-2273/configmaps/e2e-watch-test-configmap-b 48328a81-9a9a-4cb1-9d96-7244acf474c3 15070613 0 2020-05-10 21:51:17 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},}
May 10 21:51:27.872: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-2273 /api/v1/namespaces/watch-2273/configmaps/e2e-watch-test-configmap-b 48328a81-9a9a-4cb1-9d96-7244acf474c3 15070613 0 2020-05-10 21:51:17 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
May 10 21:51:37.873: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-2273" for this suite.
• [SLOW TEST:60.221 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
should observe add, update, and delete watch notifications on configmaps [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance]","total":278,"completed":137,"skipped":2267,"failed":0}
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
May 10 21:51:37.883: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for pods for Hostname [LinuxOnly] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a test headless service
STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-2.dns-test-service-2.dns-1753.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-2.dns-test-service-2.dns-1753.svc.cluster.local;test -n "$$(getent hosts dns-querier-2)" && echo OK > /results/wheezy_hosts@dns-querier-2;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-1753.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done
STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-2.dns-test-service-2.dns-1753.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-2.dns-test-service-2.dns-1753.svc.cluster.local;test -n "$$(getent hosts dns-querier-2)" && echo OK > /results/jessie_hosts@dns-querier-2;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-1753.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done
STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
May 10 21:51:46.037: INFO: DNS probes using dns-1753/dns-test-af9c84c1-eb4a-408e-bfea-fa2f20fb2e9e succeeded
STEP: deleting the pod
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
May 10 21:51:46.265: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-1753" for this suite.
• [SLOW TEST:8.395 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
should provide DNS for pods for Hostname [LinuxOnly] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance]","total":278,"completed":138,"skipped":2289,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD with validation schema [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
May 10 21:51:46.278: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for CRD with validation schema [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
May 10 21:51:46.705: INFO: >>> kubeConfig: /root/.kube/config
STEP: client-side validation (kubectl create and apply) allows request with known and required properties
May 10 21:51:49.264: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-8402 create -f -'
May 10 21:51:53.480: INFO: stderr: ""
May 10 21:51:53.480: INFO: stdout: "e2e-test-crd-publish-openapi-1164-crd.crd-publish-openapi-test-foo.example.com/test-foo created\n"
May 10 21:51:53.480: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-8402 delete e2e-test-crd-publish-openapi-1164-crds test-foo'
May 10 21:51:53.588: INFO: stderr: ""
May 10 21:51:53.588: INFO: stdout: "e2e-test-crd-publish-openapi-1164-crd.crd-publish-openapi-test-foo.example.com \"test-foo\" deleted\n"
May 10 21:51:53.588: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-8402 apply -f -'
May 10 21:51:53.882: INFO: stderr: ""
May 10 21:51:53.882: INFO: stdout: "e2e-test-crd-publish-openapi-1164-crd.crd-publish-openapi-test-foo.example.com/test-foo created\n"
May 10 21:51:53.882: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-8402 delete e2e-test-crd-publish-openapi-1164-crds test-foo'
May 10 21:51:54.008: INFO: stderr: ""
May 10 21:51:54.008: INFO: stdout: "e2e-test-crd-publish-openapi-1164-crd.crd-publish-openapi-test-foo.example.com \"test-foo\" deleted\n"
STEP: client-side validation (kubectl create and apply) rejects request with unknown properties when disallowed by the schema
May 10 21:51:54.008: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-8402 create -f -'
May 10 21:51:54.261: INFO: rc: 1
May 10 21:51:54.262: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-8402 apply -f -'
May 10 21:51:54.503: INFO: rc: 1
STEP: client-side validation (kubectl create and apply) rejects request without required properties
May 10 21:51:54.503: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-8402 create -f -'
May 10 21:51:54.736: INFO: rc: 1
May 10 21:51:54.736: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-8402 apply -f -'
May 10 21:51:55.016: INFO: rc: 1
STEP: kubectl explain works to explain CR properties
May 10 21:51:55.016: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain
e2e-test-crd-publish-openapi-1164-crds'
May 10 21:51:55.292: INFO: stderr: ""
May 10 21:51:55.292: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-1164-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nDESCRIPTION:\n Foo CRD for Testing\n\nFIELDS:\n apiVersion\t\n APIVersion defines the versioned schema of this representation of an\n object. Servers should convert recognized schemas to the latest internal\n value, and may reject unrecognized values. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n\n kind\t\n Kind is a string value representing the REST resource this object\n represents. Servers may infer this from the endpoint the client submits\n requests to. Cannot be updated. In CamelCase. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n\n metadata\t\n Standard object's metadata. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n spec\t\n Specification of Foo\n\n status\t\n Status of Foo\n\n"
STEP: kubectl explain works to explain CR properties recursively
May 10 21:51:55.293: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-1164-crds.metadata'
May 10 21:51:55.576: INFO: stderr: ""
May 10 21:51:55.576: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-1164-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: metadata \n\nDESCRIPTION:\n Standard object's metadata. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n ObjectMeta is metadata that all persisted resources must have, which\n includes all objects users must create.\n\nFIELDS:\n annotations\t\n Annotations is an unstructured key value map stored with a resource that\n may be set by external tools to store and retrieve arbitrary metadata. They\n are not queryable and should be preserved when modifying objects. More\n info: http://kubernetes.io/docs/user-guide/annotations\n\n clusterName\t\n The name of the cluster which the object belongs to. This is used to\n distinguish resources with same name and namespace in different clusters.\n This field is not set anywhere right now and apiserver is going to ignore\n it if set in create or update request.\n\n creationTimestamp\t\n CreationTimestamp is a timestamp representing the server time when this\n object was created. It is not guaranteed to be set in happens-before order\n across separate operations. Clients may not set this value. It is\n represented in RFC3339 form and is in UTC. Populated by the system.\n Read-only. Null for lists. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n deletionGracePeriodSeconds\t\n Number of seconds allowed for this object to gracefully terminate before it\n will be removed from the system. Only set when deletionTimestamp is also\n set. May only be shortened. Read-only.\n\n deletionTimestamp\t\n DeletionTimestamp is RFC 3339 date and time at which this resource will be\n deleted. This field is set by the server when a graceful deletion is\n requested by the user, and is not directly settable by a client. The\n resource is expected to be deleted (no longer visible from resource lists,\n and not reachable by name) after the time in this field, once the\n finalizers list is empty. As long as the finalizers list contains items,\n deletion is blocked. Once the deletionTimestamp is set, this value may not\n be unset or be set further into the future, although it may be shortened or\n the resource may be deleted prior to this time. For example, a user may\n request that a pod is deleted in 30 seconds. The Kubelet will react by\n sending a graceful termination signal to the containers in the pod. After\n that 30 seconds, the Kubelet will send a hard termination signal (SIGKILL)\n to the container and after cleanup, remove the pod from the API. In the\n presence of network partitions, this object may still exist after this\n timestamp, until an administrator or automated process can determine the\n resource is fully terminated. If not set, graceful deletion of the object\n has not been requested. Populated by the system when a graceful deletion is\n requested. Read-only. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n finalizers\t<[]string>\n Must be empty before the object is deleted from the registry. Each entry is\n an identifier for the responsible component that will remove the entry from\n the list. If the deletionTimestamp of the object is non-nil, entries in\n this list can only be removed. Finalizers may be processed and removed in\n any order. Order is NOT enforced because it introduces significant risk of\n stuck finalizers. finalizers is a shared field, any actor with permission\n can reorder it. If the finalizer list is processed in order, then this can\n lead to a situation in which the component responsible for the first\n finalizer in the list is waiting for a signal (field value, external\n system, or other) produced by a component responsible for a finalizer later\n in the list, resulting in a deadlock. Without enforced ordering finalizers\n are free to order amongst themselves and are not vulnerable to ordering\n changes in the list.\n\n generateName\t\n GenerateName is an optional prefix, used by the server, to generate a\n unique name ONLY IF the Name field has not been provided. If this field is\n used, the name returned to the client will be different than the name\n passed. This value will also be combined with a unique suffix. The provided\n value has the same validation rules as the Name field, and may be truncated\n by the length of the suffix required to make the value unique on the\n server. If this field is specified and the generated name exists, the\n server will NOT return a 409 - instead, it will either return 201 Created\n or 500 with Reason ServerTimeout indicating a unique name could not be\n found in the time allotted, and the client should retry (optionally after\n the time indicated in the Retry-After header). Applied only if Name is not\n specified. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#idempotency\n\n generation\t\n A sequence number representing a specific generation of the desired state.\n Populated by the system. Read-only.\n\n labels\t\n Map of string keys and values that can be used to organize and categorize\n (scope and select) objects. May match selectors of replication controllers\n and services. More info: http://kubernetes.io/docs/user-guide/labels\n\n managedFields\t<[]Object>\n ManagedFields maps workflow-id and version to the set of fields that are\n managed by that workflow. This is mostly for internal housekeeping, and\n users typically shouldn't need to set or understand this field. A workflow\n can be the user's name, a controller's name, or the name of a specific\n apply path like \"ci-cd\". The set of fields is always in the version that\n the workflow used when modifying the object.\n\n name\t\n Name must be unique within a namespace. Is required when creating\n resources, although some resources may allow a client to request the\n generation of an appropriate name automatically. Name is primarily intended\n for creation idempotence and configuration definition. Cannot be updated.\n More info: http://kubernetes.io/docs/user-guide/identifiers#names\n\n namespace\t\n Namespace defines the space within each name must be unique. An empty\n namespace is equivalent to the \"default\" namespace, but \"default\" is the\n canonical representation. Not all objects are required to be scoped to a\n namespace - the value of this field for those objects will be empty. Must\n be a DNS_LABEL. Cannot be updated. More info:\n http://kubernetes.io/docs/user-guide/namespaces\n\n ownerReferences\t<[]Object>\n List of objects depended by this object. If ALL objects in the list have\n been deleted, this object will be garbage collected. If this object is\n managed by a controller, then an entry in this list will point to this\n controller, with the controller field set to true. There cannot be more\n than one managing controller.\n\n resourceVersion\t\n An opaque value that represents the internal version of this object that\n can be used by clients to determine when objects have changed. May be used\n for optimistic concurrency, change detection, and the watch operation on a\n resource or set of resources. Clients must treat these values as opaque and\n passed unmodified back to the server. They may only be valid for a\n particular resource or set of resources. Populated by the system.\n Read-only. Value must be treated as opaque by clients and . More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#concurrency-control-and-consistency\n\n selfLink\t\n SelfLink is a URL representing this object. Populated by the system.\n Read-only. DEPRECATED Kubernetes will stop propagating this field in 1.20\n release and the field is planned to be removed in 1.21 release.\n\n uid\t\n UID is the unique in time and space value for this object. It is typically\n generated by the server on successful creation of a resource and is not\n allowed to change on PUT operations. Populated by the system. Read-only.\n More info: http://kubernetes.io/docs/user-guide/identifiers#uids\n\n"
May 10 21:51:55.577: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-1164-crds.spec'
May 10 21:51:55.821: INFO: stderr: ""
May 10 21:51:55.821: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-1164-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: spec \n\nDESCRIPTION:\n Specification of Foo\n\nFIELDS:\n bars\t<[]Object>\n List of Bars and their specs.\n\n"
May 10 21:51:55.821: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-1164-crds.spec.bars'
May 10 21:51:56.105: INFO: stderr: ""
May 10 21:51:56.105: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-1164-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: bars <[]Object>\n\nDESCRIPTION:\n List of Bars and their specs.\n\nFIELDS:\n age\t\n Age of Bar.\n\n bazs\t<[]string>\n List of Bazs.\n\n name\t -required-\n Name of Bar.\n\n"
STEP: kubectl explain works to return error when explain is called on property that doesn't exist
May 10 21:51:56.105: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-1164-crds.spec.bars2'
May 10 21:51:56.351: INFO: rc: 1
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
May 10 21:51:59.212: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-8402" for this suite.
• [SLOW TEST:12.941 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
works for CRD with validation schema [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD with validation schema [Conformance]","total":278,"completed":139,"skipped":2313,"failed":0}
SSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
May 10 21:51:59.220: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
May 10 21:52:00.140: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
May 10 21:52:02.150: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724744320, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724744320, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724744320, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724744320, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
May 10 21:52:05.197: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should honor timeout [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Setting timeout (1s) shorter than webhook latency (5s)
STEP: Registering slow webhook via the AdmissionRegistration API
STEP: Request fails when timeout (1s) is shorter than slow webhook latency (5s)
STEP: Having no error when timeout is shorter than webhook latency and failure policy is ignore
STEP: Registering slow webhook via the AdmissionRegistration API
STEP: Having no error when timeout is longer than webhook latency
STEP: Registering slow webhook via the AdmissionRegistration API
STEP: Having no error when timeout is empty (defaulted to 10s in v1)
STEP: Registering slow webhook via the AdmissionRegistration API
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
May 10 21:52:17.466: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-2317" for this suite.
STEP: Destroying namespace "webhook-2317-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:18.372 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should honor timeout [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","total":278,"completed":140,"skipped":2317,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 10 21:52:17.592: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating projection with secret that has name projected-secret-test-7aaa38ba-eb46-4f91-a1ae-517cf77945a4 STEP: Creating a pod to test consume secrets May 10 21:52:17.646: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-c9bb892a-f88b-444a-969c-2702b1085770" in namespace "projected-260" to be "success or failure" May 10 21:52:17.650: INFO: 
Pod "pod-projected-secrets-c9bb892a-f88b-444a-969c-2702b1085770": Phase="Pending", Reason="", readiness=false. Elapsed: 4.259721ms May 10 21:52:19.697: INFO: Pod "pod-projected-secrets-c9bb892a-f88b-444a-969c-2702b1085770": Phase="Pending", Reason="", readiness=false. Elapsed: 2.051540855s May 10 21:52:21.702: INFO: Pod "pod-projected-secrets-c9bb892a-f88b-444a-969c-2702b1085770": Phase="Running", Reason="", readiness=true. Elapsed: 4.055891762s May 10 21:52:23.706: INFO: Pod "pod-projected-secrets-c9bb892a-f88b-444a-969c-2702b1085770": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.060361285s STEP: Saw pod success May 10 21:52:23.706: INFO: Pod "pod-projected-secrets-c9bb892a-f88b-444a-969c-2702b1085770" satisfied condition "success or failure" May 10 21:52:23.708: INFO: Trying to get logs from node jerma-worker pod pod-projected-secrets-c9bb892a-f88b-444a-969c-2702b1085770 container projected-secret-volume-test: STEP: delete the pod May 10 21:52:23.743: INFO: Waiting for pod pod-projected-secrets-c9bb892a-f88b-444a-969c-2702b1085770 to disappear May 10 21:52:23.748: INFO: Pod pod-projected-secrets-c9bb892a-f88b-444a-969c-2702b1085770 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 10 21:52:23.748: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-260" for this suite. 
• [SLOW TEST:6.163 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":141,"skipped":2366,"failed":0}
SSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
May 10 21:52:23.756: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name projected-configmap-test-volume-60220aaf-c720-4f13-8403-1983ae81cf98
STEP: Creating a pod to test consume configMaps
May 10 21:52:23.879: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-9de21492-7714-4bb9-8d8a-acc553e475b8" in namespace "projected-4840" to be "success or failure"
May 10 21:52:23.892: INFO: Pod "pod-projected-configmaps-9de21492-7714-4bb9-8d8a-acc553e475b8": Phase="Pending", Reason="", readiness=false. Elapsed: 12.875179ms
May 10 21:52:25.896: INFO: Pod "pod-projected-configmaps-9de21492-7714-4bb9-8d8a-acc553e475b8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016704124s
May 10 21:52:27.900: INFO: Pod "pod-projected-configmaps-9de21492-7714-4bb9-8d8a-acc553e475b8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.0202776s
STEP: Saw pod success
May 10 21:52:27.900: INFO: Pod "pod-projected-configmaps-9de21492-7714-4bb9-8d8a-acc553e475b8" satisfied condition "success or failure"
May 10 21:52:27.903: INFO: Trying to get logs from node jerma-worker2 pod pod-projected-configmaps-9de21492-7714-4bb9-8d8a-acc553e475b8 container projected-configmap-volume-test:
STEP: delete the pod
May 10 21:52:28.208: INFO: Waiting for pod pod-projected-configmaps-9de21492-7714-4bb9-8d8a-acc553e475b8 to disappear
May 10 21:52:28.228: INFO: Pod pod-projected-configmaps-9de21492-7714-4bb9-8d8a-acc553e475b8 no longer exists
[AfterEach] [sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
May 10 21:52:28.228: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-4840" for this suite.
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":278,"completed":142,"skipped":2382,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replication controller. [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
May 10 21:52:28.235: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and capture the life of a replication controller. [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
STEP: Creating a ReplicationController
STEP: Ensuring resource quota status captures replication controller creation
STEP: Deleting a ReplicationController
STEP: Ensuring resource quota status released usage
[AfterEach] [sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
May 10 21:52:39.358: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-8886" for this suite.
• [SLOW TEST:11.132 seconds]
[sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
should create a ResourceQuota and capture the life of a replication controller. [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replication controller. [Conformance]","total":278,"completed":143,"skipped":2409,"failed":0}
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
May 10 21:52:39.367: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177
[It] should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
May 10 21:52:39.469: INFO: >>> kubeConfig: /root/.kube/config
STEP: creating the pod
STEP: submitting the pod to kubernetes
[AfterEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
May 10 21:52:43.520: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-4698" for this suite.
•{"msg":"PASSED [k8s.io] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance]","total":278,"completed":144,"skipped":2431,"failed":0}
SSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl rolling-update should support rolling-update to same image [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
May 10 21:52:43.529: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272
[BeforeEach] Kubectl rolling-update
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1585
[It] should support rolling-update to same image [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: running the image docker.io/library/httpd:2.4.38-alpine
May 10 21:52:43.613: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-rc --image=docker.io/library/httpd:2.4.38-alpine --generator=run/v1 --namespace=kubectl-2847'
May 10 21:52:43.720: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
May 10 21:52:43.720: INFO: stdout: "replicationcontroller/e2e-test-httpd-rc created\n"
STEP: verifying the rc e2e-test-httpd-rc was created
May 10 21:52:43.735: INFO: Waiting for rc e2e-test-httpd-rc to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0
STEP: rolling-update to same image controller
May 10 21:52:43.749: INFO: scanned /root for discovery docs:
May 10 21:52:43.749: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update e2e-test-httpd-rc --update-period=1s --image=docker.io/library/httpd:2.4.38-alpine --image-pull-policy=IfNotPresent --namespace=kubectl-2847'
May 10 21:52:59.837: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n"
May 10 21:52:59.837: INFO: stdout: "Created e2e-test-httpd-rc-eda2b0d84653847a4963d71934383cc4\nScaling up e2e-test-httpd-rc-eda2b0d84653847a4963d71934383cc4 from 0 to 1, scaling down e2e-test-httpd-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)\nScaling e2e-test-httpd-rc-eda2b0d84653847a4963d71934383cc4 up to 1\nScaling e2e-test-httpd-rc down to 0\nUpdate succeeded. Deleting old controller: e2e-test-httpd-rc\nRenaming e2e-test-httpd-rc-eda2b0d84653847a4963d71934383cc4 to e2e-test-httpd-rc\nreplicationcontroller/e2e-test-httpd-rc rolling updated\n"
STEP: waiting for all containers in run=e2e-test-httpd-rc pods to come up.
May 10 21:52:59.837: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-httpd-rc --namespace=kubectl-2847'
May 10 21:53:00.117: INFO: stderr: ""
May 10 21:53:00.117: INFO: stdout: "e2e-test-httpd-rc-eda2b0d84653847a4963d71934383cc4-r7r9q "
May 10 21:53:00.117: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-httpd-rc-eda2b0d84653847a4963d71934383cc4-r7r9q -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "e2e-test-httpd-rc") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-2847'
May 10 21:53:00.212: INFO: stderr: ""
May 10 21:53:00.212: INFO: stdout: "true"
May 10 21:53:00.213: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-httpd-rc-eda2b0d84653847a4963d71934383cc4-r7r9q -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "e2e-test-httpd-rc"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-2847'
May 10 21:53:00.363: INFO: stderr: ""
May 10 21:53:00.363: INFO: stdout: "docker.io/library/httpd:2.4.38-alpine"
May 10 21:53:00.363: INFO: e2e-test-httpd-rc-eda2b0d84653847a4963d71934383cc4-r7r9q is verified up and running
[AfterEach] Kubectl rolling-update
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1591
May 10 21:53:00.363: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-httpd-rc --namespace=kubectl-2847'
May 10 21:53:00.854: INFO: stderr: ""
May 10 21:53:00.854: INFO: stdout: "replicationcontroller \"e2e-test-httpd-rc\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
May 10 21:53:00.854: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-2847" for this suite.
• [SLOW TEST:17.332 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
Kubectl rolling-update
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1580
should support rolling-update to same image [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl rolling-update should support rolling-update to same image [Conformance]","total":278,"completed":145,"skipped":2441,"failed":0}
S
------------------------------
[k8s.io] Kubelet when scheduling a read only busybox container should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
May 10 21:53:00.860: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[AfterEach] [k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
May 10 21:53:05.067: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-8177" for this suite.
•{"msg":"PASSED [k8s.io] Kubelet when scheduling a read only busybox container should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":146,"skipped":2442,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl expose should create services for rc [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
May 10 21:53:05.120: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272
[It] should create services for rc [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating Agnhost RC
May 10 21:53:05.225: INFO: namespace kubectl-4407
May 10 21:53:05.226: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-4407'
May 10 21:53:05.598: INFO: stderr: ""
May 10 21:53:05.598: INFO: stdout: "replicationcontroller/agnhost-master created\n"
STEP: Waiting for Agnhost master to start.
May 10 21:53:06.602: INFO: Selector matched 1 pods for map[app:agnhost]
May 10 21:53:06.602: INFO: Found 0 / 1
May 10 21:53:07.602: INFO: Selector matched 1 pods for map[app:agnhost]
May 10 21:53:07.602: INFO: Found 0 / 1
May 10 21:53:08.602: INFO: Selector matched 1 pods for map[app:agnhost]
May 10 21:53:08.602: INFO: Found 0 / 1
May 10 21:53:09.602: INFO: Selector matched 1 pods for map[app:agnhost]
May 10 21:53:09.602: INFO: Found 0 / 1
May 10 21:53:10.674: INFO: Selector matched 1 pods for map[app:agnhost]
May 10 21:53:10.674: INFO: Found 1 / 1
May 10 21:53:10.674: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1
May 10 21:53:10.679: INFO: Selector matched 1 pods for map[app:agnhost]
May 10 21:53:10.679: INFO: ForEach: Found 1 pods from the filter. Now looping through them.
May 10 21:53:10.679: INFO: wait on agnhost-master startup in kubectl-4407
May 10 21:53:10.679: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs agnhost-master-tbjd5 agnhost-master --namespace=kubectl-4407'
May 10 21:53:10.881: INFO: stderr: ""
May 10 21:53:10.881: INFO: stdout: "Paused\n"
STEP: exposing RC
May 10 21:53:10.882: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose rc agnhost-master --name=rm2 --port=1234 --target-port=6379 --namespace=kubectl-4407'
May 10 21:53:11.204: INFO: stderr: ""
May 10 21:53:11.204: INFO: stdout: "service/rm2 exposed\n"
May 10 21:53:11.233: INFO: Service rm2 in namespace kubectl-4407 found.
STEP: exposing service
May 10 21:53:13.239: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose service rm2 --name=rm3 --port=2345 --target-port=6379 --namespace=kubectl-4407'
May 10 21:53:13.385: INFO: stderr: ""
May 10 21:53:13.385: INFO: stdout: "service/rm3 exposed\n"
May 10 21:53:13.391: INFO: Service rm3 in namespace kubectl-4407 found.
[AfterEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
May 10 21:53:15.398: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-4407" for this suite.
• [SLOW TEST:10.287 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
Kubectl expose
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1188
should create services for rc [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl expose should create services for rc [Conformance]","total":278,"completed":147,"skipped":2468,"failed":0}
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-auth] ServiceAccounts should allow opting out of API token automount [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-auth] ServiceAccounts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
May 10 21:53:15.407: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow opting out of API token automount [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: getting the auto-created API token
May 10 21:53:16.006: INFO: created pod pod-service-account-defaultsa
May 10 21:53:16.006: INFO: pod pod-service-account-defaultsa service account token volume mount: true
May 10 21:53:16.039: INFO: created pod pod-service-account-mountsa
May 10 21:53:16.039: INFO: pod pod-service-account-mountsa service account token volume mount: true
May 10 21:53:16.056: INFO: created pod pod-service-account-nomountsa
May 10 21:53:16.056: INFO: pod pod-service-account-nomountsa service account token volume mount: false
May 10 21:53:16.135: INFO: created pod pod-service-account-defaultsa-mountspec
May 10 21:53:16.135: INFO: pod pod-service-account-defaultsa-mountspec service account token volume mount: true
May 10 21:53:16.140: INFO: created pod pod-service-account-mountsa-mountspec
May 10 21:53:16.140: INFO: pod pod-service-account-mountsa-mountspec service account token volume mount: true
May 10 21:53:16.177: INFO: created pod pod-service-account-nomountsa-mountspec
May 10 21:53:16.177: INFO: pod pod-service-account-nomountsa-mountspec service account token volume mount: true
May 10 21:53:16.219: INFO: created pod pod-service-account-defaultsa-nomountspec
May 10 21:53:16.219: INFO: pod pod-service-account-defaultsa-nomountspec service account token volume mount: false
May 10 21:53:16.229: INFO: created pod pod-service-account-mountsa-nomountspec
May 10 21:53:16.229: INFO: pod pod-service-account-mountsa-nomountspec service account token volume mount: false
May 10 21:53:16.285: INFO: created pod pod-service-account-nomountsa-nomountspec
May 10 21:53:16.285: INFO: pod pod-service-account-nomountsa-nomountspec service account token volume mount: false
[AfterEach] [sig-auth] ServiceAccounts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
May 10 21:53:16.285: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svcaccounts-2540" for this suite.
•{"msg":"PASSED [sig-auth] ServiceAccounts should allow opting out of API token automount [Conformance]","total":278,"completed":148,"skipped":2488,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl run default should create an rc or deployment from an image [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
May 10 21:53:16.375: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272
[BeforeEach] Kubectl run default
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1489
[It] should create an rc or deployment from an image [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: running the image docker.io/library/httpd:2.4.38-alpine
May 10 21:53:16.531: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-deployment --image=docker.io/library/httpd:2.4.38-alpine --namespace=kubectl-6591'
May 10 21:53:16.636: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
May 10 21:53:16.636: INFO: stdout: "deployment.apps/e2e-test-httpd-deployment created\n"
STEP: verifying the pod controlled by e2e-test-httpd-deployment gets created
[AfterEach] Kubectl run default
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1495
May 10 21:53:18.666: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-httpd-deployment --namespace=kubectl-6591'
May 10 21:53:18.862: INFO: stderr: ""
May 10 21:53:18.862: INFO: stdout: "deployment.apps \"e2e-test-httpd-deployment\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
May 10 21:53:18.862: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-6591" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl run default should create an rc or deployment from an image [Conformance]","total":278,"completed":149,"skipped":2558,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
May 10 21:53:19.656: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
May 10 21:53:24.932: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
May 10 21:53:28.485: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724744404, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724744404, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724744406, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724744404, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
May 10 21:53:30.562: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724744404, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724744404, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724744406, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724744404, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
May 10 21:53:32.596: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724744404, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724744404, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724744406, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724744404, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
May 10 21:53:35.864: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] patching/updating a mutating webhook should work [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a mutating webhook configuration
STEP: Updating a mutating webhook configuration's rules to not include the create operation
STEP: Creating a configMap that should not be mutated
STEP: Patching a mutating webhook configuration's rules to include the create operation
STEP: Creating a configMap that should be mutated
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
May 10 21:53:36.648: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-7120" for this suite.
STEP: Destroying namespace "webhook-7120-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
• [SLOW TEST:17.140 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
patching/updating a mutating webhook should work [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]","total":278,"completed":150,"skipped":2581,"failed":0}
SS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields at the schema root [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
May 10 21:53:36.796: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for CRD preserving unknown fields at the schema root [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
May 10 21:53:37.305: INFO: >>> kubeConfig: /root/.kube/config
STEP: client-side validation (kubectl create and apply) allows request with any unknown properties
May 10 21:53:39.846: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-7328 create -f -'
May 10 21:53:45.480: INFO: stderr: ""
May 10 21:53:45.480: INFO: stdout: "e2e-test-crd-publish-openapi-6158-crd.crd-publish-openapi-test-unknown-at-root.example.com/test-cr created\n"
May 10 21:53:45.480: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-7328 delete e2e-test-crd-publish-openapi-6158-crds test-cr'
May 10 21:53:45.586: INFO: stderr: ""
May 10 21:53:45.586: INFO: stdout: "e2e-test-crd-publish-openapi-6158-crd.crd-publish-openapi-test-unknown-at-root.example.com \"test-cr\" deleted\n"
May 10 21:53:45.586: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-7328 apply -f -'
May 10 21:53:45.884: INFO: stderr: ""
May 10 21:53:45.884: INFO: stdout: "e2e-test-crd-publish-openapi-6158-crd.crd-publish-openapi-test-unknown-at-root.example.com/test-cr created\n"
May 10 21:53:45.884: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-7328 delete e2e-test-crd-publish-openapi-6158-crds test-cr'
May 10 21:53:45.998: INFO: stderr: ""
May 10 21:53:45.998: INFO: stdout: "e2e-test-crd-publish-openapi-6158-crd.crd-publish-openapi-test-unknown-at-root.example.com \"test-cr\" deleted\n"
STEP: kubectl explain works to explain CR
May 10 21:53:45.999: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-6158-crds'
May 10 21:53:46.277: INFO: stderr: ""
May 10 21:53:46.277: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-6158-crd\nVERSION: crd-publish-openapi-test-unknown-at-root.example.com/v1\n\nDESCRIPTION:\n \n"
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
May 10 21:53:48.191: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-7328" for this suite.
• [SLOW TEST:11.400 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
works for CRD preserving unknown fields at the schema root [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields at the schema root [Conformance]","total":278,"completed":151,"skipped":2583,"failed":0}
SSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
May 10 21:53:48.196: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
May 10 21:53:48.763: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
May 10 21:53:50.772: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724744428, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724744428, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724744428, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724744428, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
May 10 21:53:52.776: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724744428, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724744428, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724744428, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724744428, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the
service has paired with the endpoint May 10 21:53:55.806: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] listing validating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Listing all of the created validation webhooks STEP: Creating a configMap that does not comply to the validation webhook rules STEP: Deleting the collection of validation webhooks STEP: Creating a configMap that does not comply to the validation webhook rules [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 10 21:53:56.274: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-3080" for this suite. STEP: Destroying namespace "webhook-3080-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:8.186 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 listing validating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance]","total":278,"completed":152,"skipped":2589,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 10 21:53:56.383: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating the pod May 10 21:54:01.012: INFO: Successfully updated pod "labelsupdate5a066bea-19cb-4ee5-b4c0-d4cd9fbc7177" [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 10 21:54:05.052: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-6168" for this suite. • [SLOW TEST:8.679 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35 should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance]","total":278,"completed":153,"skipped":2602,"failed":0} SSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a service. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 10 21:54:05.062: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a service. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a Service STEP: Ensuring resource quota status captures service creation STEP: Deleting a Service STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 10 21:54:17.222: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-9050" for this suite. • [SLOW TEST:12.167 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a service. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a service. 
[Conformance]","total":278,"completed":154,"skipped":2606,"failed":0} SSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 10 21:54:17.230: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin May 10 21:54:17.311: INFO: Waiting up to 5m0s for pod "downwardapi-volume-9638e9a0-7d09-4d2b-8011-7a65b6ffc30f" in namespace "downward-api-9107" to be "success or failure" May 10 21:54:17.357: INFO: Pod "downwardapi-volume-9638e9a0-7d09-4d2b-8011-7a65b6ffc30f": Phase="Pending", Reason="", readiness=false. Elapsed: 46.149922ms May 10 21:54:19.362: INFO: Pod "downwardapi-volume-9638e9a0-7d09-4d2b-8011-7a65b6ffc30f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.050467378s May 10 21:54:21.366: INFO: Pod "downwardapi-volume-9638e9a0-7d09-4d2b-8011-7a65b6ffc30f": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.054769578s STEP: Saw pod success May 10 21:54:21.366: INFO: Pod "downwardapi-volume-9638e9a0-7d09-4d2b-8011-7a65b6ffc30f" satisfied condition "success or failure" May 10 21:54:21.369: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-9638e9a0-7d09-4d2b-8011-7a65b6ffc30f container client-container: STEP: delete the pod May 10 21:54:21.444: INFO: Waiting for pod downwardapi-volume-9638e9a0-7d09-4d2b-8011-7a65b6ffc30f to disappear May 10 21:54:21.458: INFO: Pod downwardapi-volume-9638e9a0-7d09-4d2b-8011-7a65b6ffc30f no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 10 21:54:21.458: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-9107" for this suite. •{"msg":"PASSED [sig-storage] Downward API volume should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":155,"skipped":2616,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 10 21:54:21.465: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [BeforeEach] when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81 [It] should have 
an terminated reason [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 10 21:54:29.598: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-7072" for this suite. • [SLOW TEST:8.141 seconds] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:78 should have an terminated reason [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance]","total":278,"completed":156,"skipped":2640,"failed":0} [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 10 21:54:29.606: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:86 May 10 21:54:29.679: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready May 10 21:54:29.702: INFO: 
Waiting for terminating namespaces to be deleted... May 10 21:54:29.706: INFO: Logging pods the kubelet thinks is on node jerma-worker before test May 10 21:54:29.714: INFO: kindnet-c5svj from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container statuses recorded) May 10 21:54:29.714: INFO: Container kindnet-cni ready: true, restart count 0 May 10 21:54:29.714: INFO: kube-proxy-44mlz from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container statuses recorded) May 10 21:54:29.714: INFO: Container kube-proxy ready: true, restart count 0 May 10 21:54:29.714: INFO: Logging pods the kubelet thinks is on node jerma-worker2 before test May 10 21:54:29.718: INFO: kube-hunter-8g6pb from default started at 2020-03-26 15:21:33 +0000 UTC (1 container statuses recorded) May 10 21:54:29.719: INFO: Container kube-hunter ready: false, restart count 0 May 10 21:54:29.719: INFO: bin-false956b25ca-2f82-40ae-8909-ac92f298a3bc from kubelet-test-7072 started at 2020-05-10 21:54:21 +0000 UTC (1 container statuses recorded) May 10 21:54:29.719: INFO: Container bin-false956b25ca-2f82-40ae-8909-ac92f298a3bc ready: false, restart count 0 May 10 21:54:29.719: INFO: kindnet-zk6sq from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container statuses recorded) May 10 21:54:29.719: INFO: Container kindnet-cni ready: true, restart count 0 May 10 21:54:29.719: INFO: kube-bench-hk6h6 from default started at 2020-03-26 15:21:52 +0000 UTC (1 container statuses recorded) May 10 21:54:29.719: INFO: Container kube-bench ready: false, restart count 0 May 10 21:54:29.719: INFO: kube-proxy-75q42 from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container statuses recorded) May 10 21:54:29.719: INFO: Container kube-proxy ready: true, restart count 0 [It] validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: verifying the node has the label 
node jerma-worker STEP: verifying the node has the label node jerma-worker2 May 10 21:54:29.823: INFO: Pod kindnet-c5svj requesting resource cpu=100m on Node jerma-worker May 10 21:54:29.823: INFO: Pod kindnet-zk6sq requesting resource cpu=100m on Node jerma-worker2 May 10 21:54:29.823: INFO: Pod kube-proxy-44mlz requesting resource cpu=0m on Node jerma-worker May 10 21:54:29.823: INFO: Pod kube-proxy-75q42 requesting resource cpu=0m on Node jerma-worker2 STEP: Starting Pods to consume most of the cluster CPU. May 10 21:54:29.823: INFO: Creating a pod which consumes cpu=11130m on Node jerma-worker May 10 21:54:29.833: INFO: Creating a pod which consumes cpu=11130m on Node jerma-worker2 STEP: Creating another pod that requires unavailable amount of CPU. STEP: Considering event: Type = [Normal], Name = [filler-pod-069fc76d-5753-4a92-9720-273f16c3694e.160dc962d4595cea], Reason = [Scheduled], Message = [Successfully assigned sched-pred-2177/filler-pod-069fc76d-5753-4a92-9720-273f16c3694e to jerma-worker] STEP: Considering event: Type = [Normal], Name = [filler-pod-069fc76d-5753-4a92-9720-273f16c3694e.160dc9631d1921db], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [filler-pod-069fc76d-5753-4a92-9720-273f16c3694e.160dc96367ca84f3], Reason = [Created], Message = [Created container filler-pod-069fc76d-5753-4a92-9720-273f16c3694e] STEP: Considering event: Type = [Normal], Name = [filler-pod-069fc76d-5753-4a92-9720-273f16c3694e.160dc9637b5457ff], Reason = [Started], Message = [Started container filler-pod-069fc76d-5753-4a92-9720-273f16c3694e] STEP: Considering event: Type = [Normal], Name = [filler-pod-e6111be2-e3ea-4bfc-8482-497df1a73f53.160dc962d59f0192], Reason = [Scheduled], Message = [Successfully assigned sched-pred-2177/filler-pod-e6111be2-e3ea-4bfc-8482-497df1a73f53 to jerma-worker2] STEP: Considering event: Type = [Normal], Name = 
[filler-pod-e6111be2-e3ea-4bfc-8482-497df1a73f53.160dc9635870db80], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [filler-pod-e6111be2-e3ea-4bfc-8482-497df1a73f53.160dc9638e12b390], Reason = [Created], Message = [Created container filler-pod-e6111be2-e3ea-4bfc-8482-497df1a73f53] STEP: Considering event: Type = [Normal], Name = [filler-pod-e6111be2-e3ea-4bfc-8482-497df1a73f53.160dc9639fce8653], Reason = [Started], Message = [Started container filler-pod-e6111be2-e3ea-4bfc-8482-497df1a73f53] STEP: Considering event: Type = [Warning], Name = [additional-pod.160dc963c653aecb], Reason = [FailedScheduling], Message = [0/3 nodes are available: 1 node(s) had taints that the pod didn't tolerate, 2 Insufficient cpu.] STEP: removing the label node off the node jerma-worker STEP: verifying the node doesn't have the label node STEP: removing the label node off the node jerma-worker2 STEP: verifying the node doesn't have the label node [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 10 21:54:34.963: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-2177" for this suite. 
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:77 • [SLOW TEST:5.364 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance]","total":278,"completed":157,"skipped":2640,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 10 21:54:34.970: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:125 STEP: Setting up server cert STEP: Create role binding to let cr conversion webhook read extension-apiserver-authentication STEP: Deploying the custom resource conversion webhook pod STEP: Wait for the deployment to be ready May 10 21:54:35.891: INFO: deployment "sample-crd-conversion-webhook-deployment" doesn't have the required revision set May 10 
21:54:37.900: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724744475, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724744475, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724744475, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724744475, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-78dcf5dd84\" is progressing."}}, CollisionCount:(*int32)(nil)} May 10 21:54:39.905: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724744475, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724744475, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724744475, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724744475, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-78dcf5dd84\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook 
service STEP: Verifying the service has paired with the endpoint May 10 21:54:42.969: INFO: Waiting for amount of service:e2e-test-crd-conversion-webhook endpoints to be 1 [It] should be able to convert from CR v1 to CR v2 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 10 21:54:42.975: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating a v1 custom resource STEP: v2 custom resource should be converted [AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 10 21:54:44.286: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-webhook-9393" for this suite. [AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:136 • [SLOW TEST:9.432 seconds] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to convert from CR v1 to CR v2 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]","total":278,"completed":158,"skipped":2665,"failed":0} S ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected secret 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 10 21:54:44.403: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating projection with secret that has name projected-secret-test-9fac0d03-396d-4be1-b7a9-aec1d159d97d STEP: Creating a pod to test consume secrets May 10 21:54:44.471: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-8a7e7384-d077-4db0-9ffe-9b4895820aac" in namespace "projected-7957" to be "success or failure" May 10 21:54:44.491: INFO: Pod "pod-projected-secrets-8a7e7384-d077-4db0-9ffe-9b4895820aac": Phase="Pending", Reason="", readiness=false. Elapsed: 20.13278ms May 10 21:54:46.495: INFO: Pod "pod-projected-secrets-8a7e7384-d077-4db0-9ffe-9b4895820aac": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02476875s May 10 21:54:48.499: INFO: Pod "pod-projected-secrets-8a7e7384-d077-4db0-9ffe-9b4895820aac": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.028225809s STEP: Saw pod success May 10 21:54:48.499: INFO: Pod "pod-projected-secrets-8a7e7384-d077-4db0-9ffe-9b4895820aac" satisfied condition "success or failure" May 10 21:54:48.502: INFO: Trying to get logs from node jerma-worker pod pod-projected-secrets-8a7e7384-d077-4db0-9ffe-9b4895820aac container projected-secret-volume-test: STEP: delete the pod May 10 21:54:48.538: INFO: Waiting for pod pod-projected-secrets-8a7e7384-d077-4db0-9ffe-9b4895820aac to disappear May 10 21:54:48.549: INFO: Pod pod-projected-secrets-8a7e7384-d077-4db0-9ffe-9b4895820aac no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 10 21:54:48.549: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-7957" for this suite. •{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":159,"skipped":2666,"failed":0} SSSSSSSS ------------------------------ [sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 10 21:54:48.556: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide host IP as an env var [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward api env vars May 10 21:54:48.635: INFO: Waiting up to 5m0s for pod 
"downward-api-c013895d-e98c-4b3f-8ee6-a9a5f66d3c32" in namespace "downward-api-9301" to be "success or failure" May 10 21:54:48.644: INFO: Pod "downward-api-c013895d-e98c-4b3f-8ee6-a9a5f66d3c32": Phase="Pending", Reason="", readiness=false. Elapsed: 9.432777ms May 10 21:54:50.699: INFO: Pod "downward-api-c013895d-e98c-4b3f-8ee6-a9a5f66d3c32": Phase="Pending", Reason="", readiness=false. Elapsed: 2.064124342s May 10 21:54:52.703: INFO: Pod "downward-api-c013895d-e98c-4b3f-8ee6-a9a5f66d3c32": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.068174708s STEP: Saw pod success May 10 21:54:52.703: INFO: Pod "downward-api-c013895d-e98c-4b3f-8ee6-a9a5f66d3c32" satisfied condition "success or failure" May 10 21:54:52.706: INFO: Trying to get logs from node jerma-worker2 pod downward-api-c013895d-e98c-4b3f-8ee6-a9a5f66d3c32 container dapi-container: STEP: delete the pod May 10 21:54:52.742: INFO: Waiting for pod downward-api-c013895d-e98c-4b3f-8ee6-a9a5f66d3c32 to disappear May 10 21:54:52.765: INFO: Pod downward-api-c013895d-e98c-4b3f-8ee6-a9a5f66d3c32 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 10 21:54:52.765: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-9301" for this suite. 
•{"msg":"PASSED [sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance]","total":278,"completed":160,"skipped":2674,"failed":0} SS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 10 21:54:52.772: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name secret-test-6c310249-6f33-498d-a040-0d6afd9ff7f6 STEP: Creating a pod to test consume secrets May 10 21:54:52.869: INFO: Waiting up to 5m0s for pod "pod-secrets-c8ed972e-f42b-4026-bbcf-1edc5dfe3e38" in namespace "secrets-619" to be "success or failure" May 10 21:54:52.872: INFO: Pod "pod-secrets-c8ed972e-f42b-4026-bbcf-1edc5dfe3e38": Phase="Pending", Reason="", readiness=false. Elapsed: 3.412637ms May 10 21:54:54.876: INFO: Pod "pod-secrets-c8ed972e-f42b-4026-bbcf-1edc5dfe3e38": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007332403s May 10 21:54:56.881: INFO: Pod "pod-secrets-c8ed972e-f42b-4026-bbcf-1edc5dfe3e38": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.012004102s STEP: Saw pod success May 10 21:54:56.881: INFO: Pod "pod-secrets-c8ed972e-f42b-4026-bbcf-1edc5dfe3e38" satisfied condition "success or failure" May 10 21:54:56.884: INFO: Trying to get logs from node jerma-worker2 pod pod-secrets-c8ed972e-f42b-4026-bbcf-1edc5dfe3e38 container secret-volume-test: STEP: delete the pod May 10 21:54:56.922: INFO: Waiting for pod pod-secrets-c8ed972e-f42b-4026-bbcf-1edc5dfe3e38 to disappear May 10 21:54:56.927: INFO: Pod pod-secrets-c8ed972e-f42b-4026-bbcf-1edc5dfe3e38 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 10 21:54:56.927: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-619" for this suite. •{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":161,"skipped":2676,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Proxy version v1 should proxy logs on node using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 10 21:54:56.934: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy logs on node using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 10 21:54:57.013: INFO: (0) /api/v1/nodes/jerma-worker/proxy/logs/:
containers/ pods/ (200; 6.440521ms) May 10 21:54:57.017: INFO: (1) /api/v1/nodes/jerma-worker/proxy/logs/: containers/ pods/ (200; 3.827094ms) May 10 21:54:57.021: INFO: (2) /api/v1/nodes/jerma-worker/proxy/logs/: containers/ pods/ (200; 3.31172ms) May 10 21:54:57.138: INFO: (3) /api/v1/nodes/jerma-worker/proxy/logs/: containers/ pods/ (200; 117.031529ms) May 10 21:54:57.142: INFO: (4) /api/v1/nodes/jerma-worker/proxy/logs/: containers/ pods/ (200; 4.614595ms) May 10 21:54:57.146: INFO: (5) /api/v1/nodes/jerma-worker/proxy/logs/: containers/ pods/ (200; 3.829638ms) May 10 21:54:57.151: INFO: (6) /api/v1/nodes/jerma-worker/proxy/logs/: containers/ pods/ (200; 4.281752ms) May 10 21:54:57.154: INFO: (7) /api/v1/nodes/jerma-worker/proxy/logs/: containers/ pods/ (200; 3.55425ms) May 10 21:54:57.158: INFO: (8) /api/v1/nodes/jerma-worker/proxy/logs/: containers/ pods/ (200; 3.552698ms) May 10 21:54:57.161: INFO: (9) /api/v1/nodes/jerma-worker/proxy/logs/: containers/ pods/ (200; 3.710829ms) May 10 21:54:57.165: INFO: (10) /api/v1/nodes/jerma-worker/proxy/logs/: containers/ pods/ (200; 3.742425ms) May 10 21:54:57.168: INFO: (11) /api/v1/nodes/jerma-worker/proxy/logs/: containers/ pods/ (200; 2.936722ms) May 10 21:54:57.171: INFO: (12) /api/v1/nodes/jerma-worker/proxy/logs/: containers/ pods/ (200; 2.960427ms) May 10 21:54:57.174: INFO: (13) /api/v1/nodes/jerma-worker/proxy/logs/: containers/ pods/ (200; 2.960175ms) May 10 21:54:57.177: INFO: (14) /api/v1/nodes/jerma-worker/proxy/logs/: containers/ pods/ (200; 3.072382ms) May 10 21:54:57.180: INFO: (15) /api/v1/nodes/jerma-worker/proxy/logs/: containers/ pods/ (200; 2.878878ms) May 10 21:54:57.183: INFO: (16) /api/v1/nodes/jerma-worker/proxy/logs/: containers/ pods/ (200; 3.179419ms) May 10 21:54:57.186: INFO: (17) /api/v1/nodes/jerma-worker/proxy/logs/: containers/ pods/ (200; 3.097137ms) May 10 21:54:57.190: INFO: (18) /api/v1/nodes/jerma-worker/proxy/logs/: containers/ pods/ (200; 3.191038ms) May 10 21:54:57.193: INFO: (19) /api/v1/nodes/jerma-worker/proxy/logs/: containers/ pods/
(200; 3.365389ms) [AfterEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 10 21:54:57.193: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "proxy-3156" for this suite. •{"msg":"PASSED [sig-network] Proxy version v1 should proxy logs on node using proxy subresource [Conformance]","total":278,"completed":162,"skipped":2705,"failed":0} SSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 10 21:54:57.200: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Performing setup for networking test in namespace pod-network-test-2799 STEP: creating a selector STEP: Creating the service pods in kubernetes May 10 21:54:57.294: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods May 10 21:55:23.554: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.1.36:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-2799 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 10 21:55:23.554: INFO: >>> kubeConfig: /root/.kube/config I0510 21:55:23.588764 6 log.go:172] (0xc0017efe40) (0xc001f1db80) 
Create stream I0510 21:55:23.588795 6 log.go:172] (0xc0017efe40) (0xc001f1db80) Stream added, broadcasting: 1 I0510 21:55:23.591192 6 log.go:172] (0xc0017efe40) Reply frame received for 1 I0510 21:55:23.591237 6 log.go:172] (0xc0017efe40) (0xc0009cbae0) Create stream I0510 21:55:23.591250 6 log.go:172] (0xc0017efe40) (0xc0009cbae0) Stream added, broadcasting: 3 I0510 21:55:23.592347 6 log.go:172] (0xc0017efe40) Reply frame received for 3 I0510 21:55:23.592391 6 log.go:172] (0xc0017efe40) (0xc000256140) Create stream I0510 21:55:23.592409 6 log.go:172] (0xc0017efe40) (0xc000256140) Stream added, broadcasting: 5 I0510 21:55:23.593578 6 log.go:172] (0xc0017efe40) Reply frame received for 5 I0510 21:55:23.676495 6 log.go:172] (0xc0017efe40) Data frame received for 5 I0510 21:55:23.676555 6 log.go:172] (0xc000256140) (5) Data frame handling I0510 21:55:23.676603 6 log.go:172] (0xc0017efe40) Data frame received for 3 I0510 21:55:23.676634 6 log.go:172] (0xc0009cbae0) (3) Data frame handling I0510 21:55:23.676674 6 log.go:172] (0xc0009cbae0) (3) Data frame sent I0510 21:55:23.676701 6 log.go:172] (0xc0017efe40) Data frame received for 3 I0510 21:55:23.676725 6 log.go:172] (0xc0009cbae0) (3) Data frame handling I0510 21:55:23.678461 6 log.go:172] (0xc0017efe40) Data frame received for 1 I0510 21:55:23.678488 6 log.go:172] (0xc001f1db80) (1) Data frame handling I0510 21:55:23.678512 6 log.go:172] (0xc001f1db80) (1) Data frame sent I0510 21:55:23.678535 6 log.go:172] (0xc0017efe40) (0xc001f1db80) Stream removed, broadcasting: 1 I0510 21:55:23.678550 6 log.go:172] (0xc0017efe40) Go away received I0510 21:55:23.678619 6 log.go:172] (0xc0017efe40) (0xc001f1db80) Stream removed, broadcasting: 1 I0510 21:55:23.678640 6 log.go:172] (0xc0017efe40) (0xc0009cbae0) Stream removed, broadcasting: 3 I0510 21:55:23.678647 6 log.go:172] (0xc0017efe40) (0xc000256140) Stream removed, broadcasting: 5 May 10 21:55:23.678: INFO: Found all expected endpoints: [netserver-0] May 10 21:55:23.681: 
INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.2.205:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-2799 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 10 21:55:23.681: INFO: >>> kubeConfig: /root/.kube/config I0510 21:55:23.711063 6 log.go:172] (0xc00181c9a0) (0xc000aa0820) Create stream I0510 21:55:23.711098 6 log.go:172] (0xc00181c9a0) (0xc000aa0820) Stream added, broadcasting: 1 I0510 21:55:23.713789 6 log.go:172] (0xc00181c9a0) Reply frame received for 1 I0510 21:55:23.713848 6 log.go:172] (0xc00181c9a0) (0xc001ea9ae0) Create stream I0510 21:55:23.713871 6 log.go:172] (0xc00181c9a0) (0xc001ea9ae0) Stream added, broadcasting: 3 I0510 21:55:23.714917 6 log.go:172] (0xc00181c9a0) Reply frame received for 3 I0510 21:55:23.714956 6 log.go:172] (0xc00181c9a0) (0xc000256280) Create stream I0510 21:55:23.714969 6 log.go:172] (0xc00181c9a0) (0xc000256280) Stream added, broadcasting: 5 I0510 21:55:23.715792 6 log.go:172] (0xc00181c9a0) Reply frame received for 5 I0510 21:55:23.787150 6 log.go:172] (0xc00181c9a0) Data frame received for 5 I0510 21:55:23.787227 6 log.go:172] (0xc00181c9a0) Data frame received for 3 I0510 21:55:23.787282 6 log.go:172] (0xc001ea9ae0) (3) Data frame handling I0510 21:55:23.787312 6 log.go:172] (0xc001ea9ae0) (3) Data frame sent I0510 21:55:23.787336 6 log.go:172] (0xc000256280) (5) Data frame handling I0510 21:55:23.787412 6 log.go:172] (0xc00181c9a0) Data frame received for 3 I0510 21:55:23.787440 6 log.go:172] (0xc001ea9ae0) (3) Data frame handling I0510 21:55:23.789635 6 log.go:172] (0xc00181c9a0) Data frame received for 1 I0510 21:55:23.789667 6 log.go:172] (0xc000aa0820) (1) Data frame handling I0510 21:55:23.789684 6 log.go:172] (0xc000aa0820) (1) Data frame sent I0510 21:55:23.789708 6 log.go:172] (0xc00181c9a0) (0xc000aa0820) Stream removed, broadcasting: 1 I0510 21:55:23.789774 6 
log.go:172] (0xc00181c9a0) (0xc000aa0820) Stream removed, broadcasting: 1 I0510 21:55:23.789786 6 log.go:172] (0xc00181c9a0) (0xc001ea9ae0) Stream removed, broadcasting: 3 I0510 21:55:23.789797 6 log.go:172] (0xc00181c9a0) (0xc000256280) Stream removed, broadcasting: 5 I0510 21:55:23.789815 6 log.go:172] (0xc00181c9a0) Go away received May 10 21:55:23.789: INFO: Found all expected endpoints: [netserver-1] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 10 21:55:23.789: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-2799" for this suite. • [SLOW TEST:26.597 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29 should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":163,"skipped":2711,"failed":0} SSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch watch on custom resource definition objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 10 21:55:23.798: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a 
namespace api object, basename crd-watch STEP: Waiting for a default service account to be provisioned in namespace [It] watch on custom resource definition objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 10 21:55:23.863: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating first CR May 10 21:55:24.479: INFO: Got : ADDED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-05-10T21:55:24Z generation:1 name:name1 resourceVersion:15072335 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:f49a1f18-0573-4f25-83e2-d93c6af38cee] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Creating second CR May 10 21:55:34.491: INFO: Got : ADDED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-05-10T21:55:34Z generation:1 name:name2 resourceVersion:15072393 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:2d661cd1-0d77-4c6b-b513-94bfa0910746] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Modifying first CR May 10 21:55:44.498: INFO: Got : MODIFIED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-05-10T21:55:24Z generation:2 name:name1 resourceVersion:15072422 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:f49a1f18-0573-4f25-83e2-d93c6af38cee] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Modifying second CR May 10 21:55:54.504: INFO: Got : MODIFIED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-05-10T21:55:34Z generation:2 name:name2 resourceVersion:15072451 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:2d661cd1-0d77-4c6b-b513-94bfa0910746] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Deleting first 
CR May 10 21:56:04.513: INFO: Got : DELETED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-05-10T21:55:24Z generation:2 name:name1 resourceVersion:15072479 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:f49a1f18-0573-4f25-83e2-d93c6af38cee] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Deleting second CR May 10 21:56:14.521: INFO: Got : DELETED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-05-10T21:55:34Z generation:2 name:name2 resourceVersion:15072509 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:2d661cd1-0d77-4c6b-b513-94bfa0910746] num:map[num1:9223372036854775807 num2:1000000]]} [AfterEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 10 21:56:25.031: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-watch-2504" for this suite. 
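The watch test above drives ADDED/MODIFIED/DELETED events for custom resources of kind WishIHadChosenNoxu. A CRD sketch consistent with the group/version and namespace-free selfLinks seen in those events (the actual fixture is generated by the test framework; this is only a hedged reconstruction) could be:

```yaml
apiVersion: apiextensions.k8s.io/v1beta1   # matches the v1beta1 serving version in the log
kind: CustomResourceDefinition
metadata:
  name: noxus.mygroup.example.com
spec:
  group: mygroup.example.com
  version: v1beta1
  scope: Cluster        # selfLinks like /apis/mygroup.example.com/v1beta1/noxus/name1 carry no namespace
  names:
    plural: noxus
    singular: noxu
    kind: WishIHadChosenNoxu
```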
• [SLOW TEST:61.240 seconds] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 CustomResourceDefinition Watch /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_watch.go:41 watch on custom resource definition objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch watch on custom resource definition objects [Conformance]","total":278,"completed":164,"skipped":2716,"failed":0} SSSSSSSS ------------------------------ [k8s.io] Lease lease API should be available [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Lease /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 10 21:56:25.038: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename lease-test STEP: Waiting for a default service account to be provisioned in namespace [It] lease API should be available [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [AfterEach] [k8s.io] Lease /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 10 21:56:25.336: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "lease-test-7185" for this suite. 
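The Lease test exercises the coordination.k8s.io API. For reference, a Lease object of the kind that API serves looks roughly like this (the name, namespace, and durations are illustrative, not taken from the test):

```yaml
apiVersion: coordination.k8s.io/v1
kind: Lease
metadata:
  name: example-lease        # illustrative
  namespace: default
spec:
  holderIdentity: holder-1   # identity of the current lease holder
  leaseDurationSeconds: 30   # how long candidates wait before taking over
```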
•{"msg":"PASSED [k8s.io] Lease lease API should be available [Conformance]","total":278,"completed":165,"skipped":2724,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 10 21:56:25.340: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name cm-test-opt-del-d005228c-3f24-4c74-887e-b93556cd7847 STEP: Creating configMap with name cm-test-opt-upd-85675ec3-87cd-4a27-86e1-fe0dfb98901f STEP: Creating the pod STEP: Deleting configmap cm-test-opt-del-d005228c-3f24-4c74-887e-b93556cd7847 STEP: Updating configmap cm-test-opt-upd-85675ec3-87cd-4a27-86e1-fe0dfb98901f STEP: Creating configMap with name cm-test-opt-create-43a58219-1344-4147-8689-03e267bba4e7 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 10 21:56:35.531: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5677" for this suite. 
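The projected-configMap test above deletes one configMap, updates another, and creates a third, then waits for the volume to reflect the changes. A sketch of a pod consuming a configMap through a projected volume with `optional: true`, which is what lets the deleted-configMap case succeed (all names illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: projected-cm-pod       # illustrative
spec:
  containers:
  - name: test-container
    image: busybox
    command: ["sh", "-c", "sleep 3600"]
    volumeMounts:
    - name: projected-cm
      mountPath: /etc/projected
  volumes:
  - name: projected-cm
    projected:
      sources:
      - configMap:
          name: cm-test-opt-del   # may be deleted; optional avoids a mount failure
          optional: true
```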
• [SLOW TEST:10.228 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":166,"skipped":2739,"failed":0} [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should include webhook resources in discovery documents [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 10 21:56:35.568: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 10 21:56:36.515: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 10 21:56:38.525: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724744596, 
loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724744596, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724744596, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724744596, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} May 10 21:56:40.545: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724744596, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724744596, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724744596, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724744596, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 10 21:56:43.599: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should include webhook resources in discovery documents [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: fetching the /apis 
discovery document STEP: finding the admissionregistration.k8s.io API group in the /apis discovery document STEP: finding the admissionregistration.k8s.io/v1 API group/version in the /apis discovery document STEP: fetching the /apis/admissionregistration.k8s.io discovery document STEP: finding the admissionregistration.k8s.io/v1 API group/version in the /apis/admissionregistration.k8s.io discovery document STEP: fetching the /apis/admissionregistration.k8s.io/v1 discovery document STEP: finding mutatingwebhookconfigurations and validatingwebhookconfigurations resources in the /apis/admissionregistration.k8s.io/v1 discovery document [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 10 21:56:43.608: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-6276" for this suite. STEP: Destroying namespace "webhook-6276-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:8.147 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should include webhook resources in discovery documents [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should include webhook resources in discovery documents [Conformance]","total":278,"completed":167,"skipped":2739,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 10 21:56:43.716: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name configmap-test-volume-8b11b23d-4173-46ec-91e3-7e0a1ffa761f STEP: Creating a pod to test consume configMaps May 10 21:56:43.826: INFO: Waiting up to 5m0s for pod "pod-configmaps-dea87294-fab5-487e-b928-54810c5b7abf" in namespace "configmap-3341" to be "success or failure" May 10 21:56:43.834: INFO: Pod "pod-configmaps-dea87294-fab5-487e-b928-54810c5b7abf": Phase="Pending", Reason="", readiness=false. 
Elapsed: 7.727017ms May 10 21:56:45.976: INFO: Pod "pod-configmaps-dea87294-fab5-487e-b928-54810c5b7abf": Phase="Pending", Reason="", readiness=false. Elapsed: 2.150262138s May 10 21:56:47.981: INFO: Pod "pod-configmaps-dea87294-fab5-487e-b928-54810c5b7abf": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.15455111s STEP: Saw pod success May 10 21:56:47.981: INFO: Pod "pod-configmaps-dea87294-fab5-487e-b928-54810c5b7abf" satisfied condition "success or failure" May 10 21:56:47.984: INFO: Trying to get logs from node jerma-worker2 pod pod-configmaps-dea87294-fab5-487e-b928-54810c5b7abf container configmap-volume-test: STEP: delete the pod May 10 21:56:48.026: INFO: Waiting for pod pod-configmaps-dea87294-fab5-487e-b928-54810c5b7abf to disappear May 10 21:56:48.032: INFO: Pod pod-configmaps-dea87294-fab5-487e-b928-54810c5b7abf no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 10 21:56:48.032: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-3341" for this suite. 
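Unlike the projected variant exercised earlier, this test mounts a plain configMap volume, where each key becomes a file under the mount path. A minimal sketch (names illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-configmap-volume      # illustrative
spec:
  restartPolicy: Never
  containers:
  - name: configmap-volume-test
    image: busybox
    command: ["sh", "-c", "cat /etc/configmap-volume/*"]
    volumeMounts:
    - name: configmap-volume
      mountPath: /etc/configmap-volume
  volumes:
  - name: configmap-volume
    configMap:
      name: configmap-test-volume  # each key in the configMap becomes a file here
```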
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":278,"completed":168,"skipped":2763,"failed":0} SSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 10 21:56:48.066: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin May 10 21:56:48.122: INFO: Waiting up to 5m0s for pod "downwardapi-volume-6655fcb9-47c7-4ad4-9a32-03c5c70e39b0" in namespace "projected-8423" to be "success or failure" May 10 21:56:48.136: INFO: Pod "downwardapi-volume-6655fcb9-47c7-4ad4-9a32-03c5c70e39b0": Phase="Pending", Reason="", readiness=false. Elapsed: 13.229339ms May 10 21:56:50.198: INFO: Pod "downwardapi-volume-6655fcb9-47c7-4ad4-9a32-03c5c70e39b0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.076074077s May 10 21:56:52.202: INFO: Pod "downwardapi-volume-6655fcb9-47c7-4ad4-9a32-03c5c70e39b0": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.079823461s STEP: Saw pod success May 10 21:56:52.202: INFO: Pod "downwardapi-volume-6655fcb9-47c7-4ad4-9a32-03c5c70e39b0" satisfied condition "success or failure" May 10 21:56:52.206: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-6655fcb9-47c7-4ad4-9a32-03c5c70e39b0 container client-container: STEP: delete the pod May 10 21:56:52.245: INFO: Waiting for pod downwardapi-volume-6655fcb9-47c7-4ad4-9a32-03c5c70e39b0 to disappear May 10 21:56:52.281: INFO: Pod downwardapi-volume-6655fcb9-47c7-4ad4-9a32-03c5c70e39b0 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 10 21:56:52.282: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8423" for this suite. •{"msg":"PASSED [sig-storage] Projected downwardAPI should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":169,"skipped":2773,"failed":0} SSSS ------------------------------ [k8s.io] [sig-node] Pods Extended [k8s.io] Delete Grace Period should be submitted and removed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 10 21:56:52.291: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Delete Grace Period /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:46 [It] should be submitted and removed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the pod STEP: setting up selector STEP: submitting the 
pod to kubernetes STEP: verifying the pod is in kubernetes May 10 21:56:56.416: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy -p 0' STEP: deleting the pod gracefully STEP: verifying the kubelet observed the termination notice May 10 21:57:11.510: INFO: no pod exists with the name we were looking for, assuming the termination request was observed and completed [AfterEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 10 21:57:11.513: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-6788" for this suite. • [SLOW TEST:19.231 seconds] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 [k8s.io] Delete Grace Period /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should be submitted and removed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] [sig-node] Pods Extended [k8s.io] Delete Grace Period should be submitted and removed [Conformance]","total":278,"completed":170,"skipped":2777,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition creating/deleting custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 10 21:57:11.523: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, 
basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] creating/deleting custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 10 21:57:11.572: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 10 21:57:12.647: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-2746" for this suite. •{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition creating/deleting custom resource definition objects works [Conformance]","total":278,"completed":171,"skipped":2791,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for ExternalName services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 10 21:57:12.658: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for ExternalName services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a test externalName service STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-3070.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-3070.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i 
in `seq 1 30`; do dig +short dns-test-service-3.dns-3070.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-3070.svc.cluster.local; sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers May 10 21:57:18.803: INFO: DNS probes using dns-test-521c2b32-31dd-4caf-8f12-db2f13fc90a6 succeeded STEP: deleting the pod STEP: changing the externalName to bar.example.com STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-3070.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-3070.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-3070.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-3070.svc.cluster.local; sleep 1; done STEP: creating a second pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers May 10 21:57:24.899: INFO: File wheezy_udp@dns-test-service-3.dns-3070.svc.cluster.local from pod dns-3070/dns-test-40a12123-391c-4fe5-8f67-b0fe7354cb55 contains 'foo.example.com. ' instead of 'bar.example.com.' May 10 21:57:24.903: INFO: File jessie_udp@dns-test-service-3.dns-3070.svc.cluster.local from pod dns-3070/dns-test-40a12123-391c-4fe5-8f67-b0fe7354cb55 contains '' instead of 'bar.example.com.' May 10 21:57:24.903: INFO: Lookups using dns-3070/dns-test-40a12123-391c-4fe5-8f67-b0fe7354cb55 failed for: [wheezy_udp@dns-test-service-3.dns-3070.svc.cluster.local jessie_udp@dns-test-service-3.dns-3070.svc.cluster.local] May 10 21:57:29.908: INFO: File wheezy_udp@dns-test-service-3.dns-3070.svc.cluster.local from pod dns-3070/dns-test-40a12123-391c-4fe5-8f67-b0fe7354cb55 contains 'foo.example.com. ' instead of 'bar.example.com.' 
May 10 21:57:29.912: INFO: File jessie_udp@dns-test-service-3.dns-3070.svc.cluster.local from pod dns-3070/dns-test-40a12123-391c-4fe5-8f67-b0fe7354cb55 contains 'foo.example.com. ' instead of 'bar.example.com.' May 10 21:57:29.912: INFO: Lookups using dns-3070/dns-test-40a12123-391c-4fe5-8f67-b0fe7354cb55 failed for: [wheezy_udp@dns-test-service-3.dns-3070.svc.cluster.local jessie_udp@dns-test-service-3.dns-3070.svc.cluster.local] May 10 21:57:34.907: INFO: File wheezy_udp@dns-test-service-3.dns-3070.svc.cluster.local from pod dns-3070/dns-test-40a12123-391c-4fe5-8f67-b0fe7354cb55 contains 'foo.example.com. ' instead of 'bar.example.com.' May 10 21:57:34.911: INFO: File jessie_udp@dns-test-service-3.dns-3070.svc.cluster.local from pod dns-3070/dns-test-40a12123-391c-4fe5-8f67-b0fe7354cb55 contains 'foo.example.com. ' instead of 'bar.example.com.' May 10 21:57:34.911: INFO: Lookups using dns-3070/dns-test-40a12123-391c-4fe5-8f67-b0fe7354cb55 failed for: [wheezy_udp@dns-test-service-3.dns-3070.svc.cluster.local jessie_udp@dns-test-service-3.dns-3070.svc.cluster.local] May 10 21:57:39.908: INFO: File wheezy_udp@dns-test-service-3.dns-3070.svc.cluster.local from pod dns-3070/dns-test-40a12123-391c-4fe5-8f67-b0fe7354cb55 contains 'foo.example.com. ' instead of 'bar.example.com.' May 10 21:57:39.911: INFO: File jessie_udp@dns-test-service-3.dns-3070.svc.cluster.local from pod dns-3070/dns-test-40a12123-391c-4fe5-8f67-b0fe7354cb55 contains 'foo.example.com. ' instead of 'bar.example.com.' May 10 21:57:39.911: INFO: Lookups using dns-3070/dns-test-40a12123-391c-4fe5-8f67-b0fe7354cb55 failed for: [wheezy_udp@dns-test-service-3.dns-3070.svc.cluster.local jessie_udp@dns-test-service-3.dns-3070.svc.cluster.local] May 10 21:57:44.907: INFO: File wheezy_udp@dns-test-service-3.dns-3070.svc.cluster.local from pod dns-3070/dns-test-40a12123-391c-4fe5-8f67-b0fe7354cb55 contains 'foo.example.com. ' instead of 'bar.example.com.' 
May 10 21:57:44.909: INFO: File jessie_udp@dns-test-service-3.dns-3070.svc.cluster.local from pod dns-3070/dns-test-40a12123-391c-4fe5-8f67-b0fe7354cb55 contains 'foo.example.com. ' instead of 'bar.example.com.' May 10 21:57:44.909: INFO: Lookups using dns-3070/dns-test-40a12123-391c-4fe5-8f67-b0fe7354cb55 failed for: [wheezy_udp@dns-test-service-3.dns-3070.svc.cluster.local jessie_udp@dns-test-service-3.dns-3070.svc.cluster.local] May 10 21:57:49.911: INFO: DNS probes using dns-test-40a12123-391c-4fe5-8f67-b0fe7354cb55 succeeded STEP: deleting the pod STEP: changing the service to type=ClusterIP STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-3070.svc.cluster.local A > /results/wheezy_udp@dns-test-service-3.dns-3070.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-3070.svc.cluster.local A > /results/jessie_udp@dns-test-service-3.dns-3070.svc.cluster.local; sleep 1; done STEP: creating a third pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers May 10 21:57:56.783: INFO: DNS probes using dns-test-fa0dd67b-cc49-4149-96ac-927911b9eaca succeeded STEP: deleting the pod STEP: deleting the test externalName service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 10 21:57:56.848: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-3070" for this suite. 
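The CNAME records the dig loops above are polling come from an ExternalName Service. A minimal sketch of the Service this test creates (name and namespace taken from the log; everything else is the standard shape of such an object, not copied from the suite):

```yaml
# Sketch of the test's alias Service (dns-test-service-3 in namespace
# dns-3070, per the log). The test first points it at foo.example.com,
# then patches externalName to bar.example.com, and finally converts it
# to type=ClusterIP; the probe pods watch for the answer to change.
apiVersion: v1
kind: Service
metadata:
  name: dns-test-service-3
  namespace: dns-3070
spec:
  type: ExternalName
  externalName: foo.example.com
```

The retries between 21:57:24 and 21:57:44 are expected behavior, not a flake: after the externalName is patched, the cluster DNS keeps serving the old `foo.example.com.` CNAME (and the jessie probe briefly gets an empty answer) until its view of the Service catches up, at which point the probes succeed.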
• [SLOW TEST:44.224 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for ExternalName services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for ExternalName services [Conformance]","total":278,"completed":172,"skipped":2814,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 10 21:57:56.882: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir-wrapper STEP: Waiting for a default service account to be provisioned in namespace [It] should not cause race condition when used for configmaps [Serial] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating 50 configmaps STEP: Creating RC which spawns configmap-volume pods May 10 21:57:58.777: INFO: Pod name wrapped-volume-race-4994b422-745a-4556-b4bd-6fc19174e2c7: Found 0 pods out of 5 May 10 21:58:03.784: INFO: Pod name wrapped-volume-race-4994b422-745a-4556-b4bd-6fc19174e2c7: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-4994b422-745a-4556-b4bd-6fc19174e2c7 in namespace emptydir-wrapper-863, will wait for the garbage collector to delete the pods May 10 21:58:15.959: INFO: Deleting ReplicationController wrapped-volume-race-4994b422-745a-4556-b4bd-6fc19174e2c7 took: 92.565758ms May 10 
21:58:16.359: INFO: Terminating ReplicationController wrapped-volume-race-4994b422-745a-4556-b4bd-6fc19174e2c7 pods took: 400.255987ms STEP: Creating RC which spawns configmap-volume pods May 10 21:58:23.112: INFO: Pod name wrapped-volume-race-bc06b5ec-9207-4043-98f6-a622f33cb79c: Found 1 pods out of 5 May 10 21:58:28.118: INFO: Pod name wrapped-volume-race-bc06b5ec-9207-4043-98f6-a622f33cb79c: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-bc06b5ec-9207-4043-98f6-a622f33cb79c in namespace emptydir-wrapper-863, will wait for the garbage collector to delete the pods May 10 21:58:42.206: INFO: Deleting ReplicationController wrapped-volume-race-bc06b5ec-9207-4043-98f6-a622f33cb79c took: 7.640519ms May 10 21:58:42.506: INFO: Terminating ReplicationController wrapped-volume-race-bc06b5ec-9207-4043-98f6-a622f33cb79c pods took: 300.275783ms STEP: Creating RC which spawns configmap-volume pods May 10 21:58:59.471: INFO: Pod name wrapped-volume-race-c4d43b99-5d3d-4d23-b810-bb28bd3d295a: Found 0 pods out of 5 May 10 21:59:04.486: INFO: Pod name wrapped-volume-race-c4d43b99-5d3d-4d23-b810-bb28bd3d295a: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-c4d43b99-5d3d-4d23-b810-bb28bd3d295a in namespace emptydir-wrapper-863, will wait for the garbage collector to delete the pods May 10 21:59:18.602: INFO: Deleting ReplicationController wrapped-volume-race-c4d43b99-5d3d-4d23-b810-bb28bd3d295a took: 12.073419ms May 10 21:59:18.902: INFO: Terminating ReplicationController wrapped-volume-race-c4d43b99-5d3d-4d23-b810-bb28bd3d295a pods took: 300.247399ms STEP: Cleaning up the configMaps [AfterEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 10 21:59:30.992: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace 
"emptydir-wrapper-863" for this suite. • [SLOW TEST:94.118 seconds] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 should not cause race condition when used for configmaps [Serial] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Conformance]","total":278,"completed":173,"skipped":2838,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 10 21:59:31.000: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via environment variable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap configmap-4593/configmap-test-a32f7ecc-7dfb-4d24-87e7-fb4b0f3dbebd STEP: Creating a pod to test consume configMaps May 10 21:59:31.135: INFO: Waiting up to 5m0s for pod "pod-configmaps-e7b9d7d5-d0a7-46be-bc4b-235974f26532" in namespace "configmap-4593" to be "success or failure" May 10 21:59:31.140: INFO: Pod "pod-configmaps-e7b9d7d5-d0a7-46be-bc4b-235974f26532": Phase="Pending", Reason="", readiness=false. Elapsed: 4.115502ms May 10 21:59:33.182: INFO: Pod "pod-configmaps-e7b9d7d5-d0a7-46be-bc4b-235974f26532": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.046551266s May 10 21:59:35.186: INFO: Pod "pod-configmaps-e7b9d7d5-d0a7-46be-bc4b-235974f26532": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.050591055s STEP: Saw pod success May 10 21:59:35.186: INFO: Pod "pod-configmaps-e7b9d7d5-d0a7-46be-bc4b-235974f26532" satisfied condition "success or failure" May 10 21:59:35.190: INFO: Trying to get logs from node jerma-worker2 pod pod-configmaps-e7b9d7d5-d0a7-46be-bc4b-235974f26532 container env-test: STEP: delete the pod May 10 21:59:35.250: INFO: Waiting for pod pod-configmaps-e7b9d7d5-d0a7-46be-bc4b-235974f26532 to disappear May 10 21:59:35.254: INFO: Pod pod-configmaps-e7b9d7d5-d0a7-46be-bc4b-235974f26532 no longer exists [AfterEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 10 21:59:35.254: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-4593" for this suite. •{"msg":"PASSED [sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance]","total":278,"completed":174,"skipped":2864,"failed":0} SSSSSSSS ------------------------------ [sig-apps] ReplicationController should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 10 21:59:35.298: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating replication controller 
my-hostname-basic-ac2d6a0d-0f69-4df2-a056-c9e1c8be5b23 May 10 21:59:35.459: INFO: Pod name my-hostname-basic-ac2d6a0d-0f69-4df2-a056-c9e1c8be5b23: Found 0 pods out of 1 May 10 21:59:40.470: INFO: Pod name my-hostname-basic-ac2d6a0d-0f69-4df2-a056-c9e1c8be5b23: Found 1 pods out of 1 May 10 21:59:40.470: INFO: Ensuring all pods for ReplicationController "my-hostname-basic-ac2d6a0d-0f69-4df2-a056-c9e1c8be5b23" are running May 10 21:59:40.472: INFO: Pod "my-hostname-basic-ac2d6a0d-0f69-4df2-a056-c9e1c8be5b23-2jj6q" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-10 21:59:35 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-10 21:59:39 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-10 21:59:39 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-10 21:59:35 +0000 UTC Reason: Message:}]) May 10 21:59:40.472: INFO: Trying to dial the pod May 10 21:59:45.489: INFO: Controller my-hostname-basic-ac2d6a0d-0f69-4df2-a056-c9e1c8be5b23: Got expected result from replica 1 [my-hostname-basic-ac2d6a0d-0f69-4df2-a056-c9e1c8be5b23-2jj6q]: "my-hostname-basic-ac2d6a0d-0f69-4df2-a056-c9e1c8be5b23-2jj6q", 1 of 1 required successes so far [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 10 21:59:45.489: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-9329" for this suite. 
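The ReplicationController being validated above runs a single replica of an image that responds to requests with its own pod name, which is why the dial at 21:59:45 expects the reply to equal the pod name. A hypothetical sketch of such a controller (the name and replica count match the log; the image and args are assumptions, based on the e2e suite's "serve-hostname" helper, not read from the source):

```yaml
# Assumed manifest shape for the test's RC: one pod whose container
# echoes its hostname, selected by a name label.
apiVersion: v1
kind: ReplicationController
metadata:
  name: my-hostname-basic-ac2d6a0d-0f69-4df2-a056-c9e1c8be5b23
spec:
  replicas: 1
  selector:
    name: my-hostname-basic-ac2d6a0d-0f69-4df2-a056-c9e1c8be5b23
  template:
    metadata:
      labels:
        name: my-hostname-basic-ac2d6a0d-0f69-4df2-a056-c9e1c8be5b23
    spec:
      containers:
      - name: serve-hostname   # assumed container name
        image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8  # assumed
        args: ["serve-hostname"]
```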
• [SLOW TEST:10.199 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] ReplicationController should serve a basic image on each replica with a public image [Conformance]","total":278,"completed":175,"skipped":2872,"failed":0} SSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] should include custom resource definition resources in discovery documents [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 10 21:59:45.496: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] should include custom resource definition resources in discovery documents [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: fetching the /apis discovery document STEP: finding the apiextensions.k8s.io API group in the /apis discovery document STEP: finding the apiextensions.k8s.io/v1 API group/version in the /apis discovery document STEP: fetching the /apis/apiextensions.k8s.io discovery document STEP: finding the apiextensions.k8s.io/v1 API group/version in the /apis/apiextensions.k8s.io discovery document STEP: fetching the /apis/apiextensions.k8s.io/v1 discovery document STEP: finding customresourcedefinitions resources in 
the /apis/apiextensions.k8s.io/v1 discovery document [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 10 21:59:45.571: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-1808" for this suite. •{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] should include custom resource definition resources in discovery documents [Conformance]","total":278,"completed":176,"skipped":2881,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 10 21:59:45.600: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name secret-test-71187393-734c-4bcb-946f-2ae281e20232 STEP: Creating a pod to test consume secrets May 10 21:59:45.782: INFO: Waiting up to 5m0s for pod "pod-secrets-15b29b47-ec37-4296-a423-33518289052a" in namespace "secrets-6977" to be "success or failure" May 10 21:59:45.787: INFO: Pod "pod-secrets-15b29b47-ec37-4296-a423-33518289052a": Phase="Pending", Reason="", readiness=false. Elapsed: 5.16144ms May 10 21:59:47.811: INFO: Pod "pod-secrets-15b29b47-ec37-4296-a423-33518289052a": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.029620157s May 10 21:59:49.816: INFO: Pod "pod-secrets-15b29b47-ec37-4296-a423-33518289052a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.034080473s STEP: Saw pod success May 10 21:59:49.816: INFO: Pod "pod-secrets-15b29b47-ec37-4296-a423-33518289052a" satisfied condition "success or failure" May 10 21:59:49.819: INFO: Trying to get logs from node jerma-worker pod pod-secrets-15b29b47-ec37-4296-a423-33518289052a container secret-env-test: STEP: delete the pod May 10 21:59:49.896: INFO: Waiting for pod pod-secrets-15b29b47-ec37-4296-a423-33518289052a to disappear May 10 21:59:49.925: INFO: Pod pod-secrets-15b29b47-ec37-4296-a423-33518289052a no longer exists [AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 10 21:59:49.925: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-6977" for this suite. •{"msg":"PASSED [sig-api-machinery] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance]","total":278,"completed":177,"skipped":2896,"failed":0} SS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and ensure its status is promptly calculated. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 10 21:59:49.939: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and ensure its status is promptly calculated. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 10 21:59:57.083: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-1837" for this suite. • [SLOW TEST:7.152 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and ensure its status is promptly calculated. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and ensure its status is promptly calculated. 
[Conformance]","total":278,"completed":178,"skipped":2898,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should get a host IP [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 10 21:59:57.092: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177 [It] should get a host IP [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating pod May 10 22:00:01.157: INFO: Pod pod-hostip-cb70c947-8276-4b7e-860d-b5e2ad3fe494 has hostIP: 172.17.0.10 [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 10 22:00:01.157: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-7525" for this suite. 
•{"msg":"PASSED [k8s.io] Pods should get a host IP [NodeConformance] [Conformance]","total":278,"completed":179,"skipped":2917,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 10 22:00:01.165: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name s-test-opt-del-f4e9aa69-8ecb-4c66-9cf8-ec34ffc6b7eb STEP: Creating secret with name s-test-opt-upd-8a5c4d07-3e0e-4fdb-b5e2-fb3d72371239 STEP: Creating the pod STEP: Deleting secret s-test-opt-del-f4e9aa69-8ecb-4c66-9cf8-ec34ffc6b7eb STEP: Updating secret s-test-opt-upd-8a5c4d07-3e0e-4fdb-b5e2-fb3d72371239 STEP: Creating secret with name s-test-opt-create-4c2e058d-b9b8-44bc-b48b-a19edac1e876 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 10 22:00:09.491: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-4168" for this suite. 
• [SLOW TEST:8.334 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":180,"skipped":2945,"failed":0} SS ------------------------------ [sig-cli] Kubectl client Kubectl cluster-info should check if Kubernetes master services is included in cluster-info [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 10 22:00:09.499: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [It] should check if Kubernetes master services is included in cluster-info [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: validating cluster-info May 10 22:00:09.607: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config cluster-info' May 10 22:00:09.699: INFO: stderr: "" May 10 22:00:09.699: INFO: stdout: "\x1b[0;32mKubernetes master\x1b[0m is running at \x1b[0;33mhttps://172.30.12.66:32770\x1b[0m\n\x1b[0;32mKubeDNS\x1b[0m is running at \x1b[0;33mhttps://172.30.12.66:32770/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy\x1b[0m\n\nTo further debug and diagnose cluster problems, use 
'kubectl cluster-info dump'.\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 10 22:00:09.699: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-4781" for this suite. •{"msg":"PASSED [sig-cli] Kubectl client Kubectl cluster-info should check if Kubernetes master services is included in cluster-info [Conformance]","total":278,"completed":181,"skipped":2947,"failed":0} SSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a configMap. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 10 22:00:09.726: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a configMap. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a ConfigMap STEP: Ensuring resource quota status captures configMap creation STEP: Deleting a ConfigMap STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 10 22:00:26.027: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-4571" for this suite. 
• [SLOW TEST:16.307 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a configMap. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a configMap. [Conformance]","total":278,"completed":182,"skipped":2953,"failed":0} SSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 10 22:00:26.034: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:86 May 10 22:00:26.156: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready May 10 22:00:26.167: INFO: Waiting for terminating namespaces to be deleted... 
May 10 22:00:26.170: INFO: Logging pods the kubelet thinks are on node jerma-worker before test May 10 22:00:26.175: INFO: kindnet-c5svj from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container statuses recorded) May 10 22:00:26.175: INFO: Container kindnet-cni ready: true, restart count 0 May 10 22:00:26.175: INFO: kube-proxy-44mlz from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container statuses recorded) May 10 22:00:26.175: INFO: Container kube-proxy ready: true, restart count 0 May 10 22:00:26.175: INFO: Logging pods the kubelet thinks are on node jerma-worker2 before test May 10 22:00:26.180: INFO: kube-hunter-8g6pb from default started at 2020-03-26 15:21:33 +0000 UTC (1 container statuses recorded) May 10 22:00:26.180: INFO: Container kube-hunter ready: false, restart count 0 May 10 22:00:26.180: INFO: kindnet-zk6sq from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container statuses recorded) May 10 22:00:26.180: INFO: Container kindnet-cni ready: true, restart count 0 May 10 22:00:26.180: INFO: kube-bench-hk6h6 from default started at 2020-03-26 15:21:52 +0000 UTC (1 container statuses recorded) May 10 22:00:26.180: INFO: Container kube-bench ready: false, restart count 0 May 10 22:00:26.180: INFO: pod-projected-secrets-04f3c5a8-94ab-4166-aa85-403060caa799 from projected-4168 started at 2020-05-10 22:00:01 +0000 UTC (3 container statuses recorded) May 10 22:00:26.180: INFO: Container creates-volume-test ready: false, restart count 0 May 10 22:00:26.180: INFO: Container dels-volume-test ready: false, restart count 0 May 10 22:00:26.180: INFO: Container upds-volume-test ready: false, restart count 0 May 10 22:00:26.180: INFO: kube-proxy-75q42 from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container statuses recorded) May 10 22:00:26.180: INFO: Container kube-proxy ready: true, restart count 0 [It] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random label on the found node. STEP: verifying the node has the label kubernetes.io/e2e-5220d525-1034-42e2-866b-d811a0d630fd 95 STEP: Trying to create a pod(pod4) with hostport 54322 and hostIP 0.0.0.0(empty string here) and expect scheduled STEP: Trying to create another pod(pod5) with hostport 54322 but hostIP 127.0.0.1 on the node which pod4 resides and expect not scheduled STEP: removing the label kubernetes.io/e2e-5220d525-1034-42e2-866b-d811a0d630fd off the node jerma-worker2 STEP: verifying the node doesn't have the label kubernetes.io/e2e-5220d525-1034-42e2-866b-d811a0d630fd [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 10 22:05:34.984: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-4622" for this suite. 
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:77 • [SLOW TEST:308.960 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]","total":278,"completed":183,"skipped":2959,"failed":0} SSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 10 22:05:34.994: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0666 on tmpfs May 10 22:05:35.112: INFO: Waiting up to 5m0s for pod "pod-84d85421-cb7b-41d9-b73b-109f8e86b5c8" in namespace "emptydir-310" to be "success or failure" May 10 22:05:35.146: INFO: Pod "pod-84d85421-cb7b-41d9-b73b-109f8e86b5c8": Phase="Pending", Reason="", readiness=false. 
Elapsed: 34.210773ms May 10 22:05:37.200: INFO: Pod "pod-84d85421-cb7b-41d9-b73b-109f8e86b5c8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.088056232s May 10 22:05:39.221: INFO: Pod "pod-84d85421-cb7b-41d9-b73b-109f8e86b5c8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.109207055s STEP: Saw pod success May 10 22:05:39.221: INFO: Pod "pod-84d85421-cb7b-41d9-b73b-109f8e86b5c8" satisfied condition "success or failure" May 10 22:05:39.238: INFO: Trying to get logs from node jerma-worker2 pod pod-84d85421-cb7b-41d9-b73b-109f8e86b5c8 container test-container: STEP: delete the pod May 10 22:05:39.303: INFO: Waiting for pod pod-84d85421-cb7b-41d9-b73b-109f8e86b5c8 to disappear May 10 22:05:39.339: INFO: Pod pod-84d85421-cb7b-41d9-b73b-109f8e86b5c8 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 10 22:05:39.339: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-310" for this suite. 
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":184,"skipped":2962,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should receive events on concurrent watches in same order [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 10 22:05:39.366: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should receive events on concurrent watches in same order [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: starting a background goroutine to produce watch events STEP: creating watches starting from each resource version of the events produced and verifying they all receive resource versions in the same order [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 10 22:05:44.737: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-4246" for this suite. 
• [SLOW TEST:5.521 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should receive events on concurrent watches in same order [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Watchers should receive events on concurrent watches in same order [Conformance]","total":278,"completed":185,"skipped":2989,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 10 22:05:44.888: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating projection with secret that has name projected-secret-test-09c25e19-1dbd-497c-bb3b-a0917073705b STEP: Creating a pod to test consume secrets May 10 22:05:45.347: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-e316d0b4-76cd-497e-a734-36ed79f9718f" in namespace "projected-8935" to be "success or failure" May 10 22:05:45.382: INFO: Pod "pod-projected-secrets-e316d0b4-76cd-497e-a734-36ed79f9718f": Phase="Pending", Reason="", readiness=false. Elapsed: 35.056382ms May 10 22:05:47.386: INFO: Pod "pod-projected-secrets-e316d0b4-76cd-497e-a734-36ed79f9718f": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.038713648s May 10 22:05:49.394: INFO: Pod "pod-projected-secrets-e316d0b4-76cd-497e-a734-36ed79f9718f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.046862502s STEP: Saw pod success May 10 22:05:49.394: INFO: Pod "pod-projected-secrets-e316d0b4-76cd-497e-a734-36ed79f9718f" satisfied condition "success or failure" May 10 22:05:49.397: INFO: Trying to get logs from node jerma-worker2 pod pod-projected-secrets-e316d0b4-76cd-497e-a734-36ed79f9718f container projected-secret-volume-test: STEP: delete the pod May 10 22:05:49.543: INFO: Waiting for pod pod-projected-secrets-e316d0b4-76cd-497e-a734-36ed79f9718f to disappear May 10 22:05:49.618: INFO: Pod pod-projected-secrets-e316d0b4-76cd-497e-a734-36ed79f9718f no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 10 22:05:49.618: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8935" for this suite. 
•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance]","total":278,"completed":186,"skipped":3025,"failed":0} S ------------------------------ [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 10 22:05:49.626: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin May 10 22:05:49.717: INFO: Waiting up to 5m0s for pod "downwardapi-volume-9be3be34-3e17-403b-9484-69a23104340a" in namespace "projected-1950" to be "success or failure" May 10 22:05:49.738: INFO: Pod "downwardapi-volume-9be3be34-3e17-403b-9484-69a23104340a": Phase="Pending", Reason="", readiness=false. Elapsed: 21.058003ms May 10 22:05:51.743: INFO: Pod "downwardapi-volume-9be3be34-3e17-403b-9484-69a23104340a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.025288186s May 10 22:05:53.747: INFO: Pod "downwardapi-volume-9be3be34-3e17-403b-9484-69a23104340a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.029245107s May 10 22:05:55.751: INFO: Pod "downwardapi-volume-9be3be34-3e17-403b-9484-69a23104340a": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.033684481s STEP: Saw pod success May 10 22:05:55.751: INFO: Pod "downwardapi-volume-9be3be34-3e17-403b-9484-69a23104340a" satisfied condition "success or failure" May 10 22:05:55.754: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-9be3be34-3e17-403b-9484-69a23104340a container client-container: STEP: delete the pod May 10 22:05:55.806: INFO: Waiting for pod downwardapi-volume-9be3be34-3e17-403b-9484-69a23104340a to disappear May 10 22:05:55.811: INFO: Pod downwardapi-volume-9be3be34-3e17-403b-9484-69a23104340a no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 10 22:05:55.811: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1950" for this suite. • [SLOW TEST:6.192 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34 should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance]","total":278,"completed":187,"skipped":3026,"failed":0} SSS ------------------------------ [k8s.io] Pods should support remote command execution over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 10 22:05:55.819: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace 
[BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177 [It] should support remote command execution over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 10 22:05:55.885: INFO: >>> kubeConfig: /root/.kube/config STEP: creating the pod STEP: submitting the pod to kubernetes [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 10 22:06:00.065: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-7702" for this suite. •{"msg":"PASSED [k8s.io] Pods should support remote command execution over websockets [NodeConformance] [Conformance]","total":278,"completed":188,"skipped":3029,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 10 22:06:00.075: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name projected-configmap-test-volume-2f77d269-108e-4e9e-8099-8667e9aea80e STEP: Creating a pod to test consume configMaps May 10 22:06:00.139: INFO: Waiting up to 5m0s for pod 
"pod-projected-configmaps-57cee619-a3bf-42a6-a96a-604c115babac" in namespace "projected-7072" to be "success or failure" May 10 22:06:00.143: INFO: Pod "pod-projected-configmaps-57cee619-a3bf-42a6-a96a-604c115babac": Phase="Pending", Reason="", readiness=false. Elapsed: 3.909764ms May 10 22:06:02.147: INFO: Pod "pod-projected-configmaps-57cee619-a3bf-42a6-a96a-604c115babac": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007897097s May 10 22:06:04.151: INFO: Pod "pod-projected-configmaps-57cee619-a3bf-42a6-a96a-604c115babac": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012132058s STEP: Saw pod success May 10 22:06:04.151: INFO: Pod "pod-projected-configmaps-57cee619-a3bf-42a6-a96a-604c115babac" satisfied condition "success or failure" May 10 22:06:04.154: INFO: Trying to get logs from node jerma-worker2 pod pod-projected-configmaps-57cee619-a3bf-42a6-a96a-604c115babac container projected-configmap-volume-test: STEP: delete the pod May 10 22:06:04.184: INFO: Waiting for pod pod-projected-configmaps-57cee619-a3bf-42a6-a96a-604c115babac to disappear May 10 22:06:04.197: INFO: Pod pod-projected-configmaps-57cee619-a3bf-42a6-a96a-604c115babac no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 10 22:06:04.197: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-7072" for this suite. 
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":189,"skipped":3045,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 10 22:06:04.207: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating the pod May 10 22:06:08.828: INFO: Successfully updated pod "labelsupdate83f92663-5c37-42db-a9f8-98cad5104496" [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 10 22:06:10.895: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-7120" for this suite. 
• [SLOW TEST:6.696 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34 should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance]","total":278,"completed":190,"skipped":3119,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 10 22:06:10.903: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 10 22:06:11.612: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 10 22:06:13.622: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724745171, 
loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724745171, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724745171, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724745171, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 10 22:06:16.661: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should deny crd creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Registering the crd webhook via the AdmissionRegistration API STEP: Creating a custom resource definition that should be denied by the webhook May 10 22:06:16.684: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 10 22:06:16.700: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-2093" for this suite. STEP: Destroying namespace "webhook-2093-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:6.083 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should deny crd creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","total":278,"completed":191,"skipped":3133,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 10 22:06:16.987: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test override command May 10 22:06:17.042: INFO: Waiting up to 5m0s for pod "client-containers-717ba0c9-64ef-497a-953c-f6af7d350f3e" in namespace "containers-517" to be "success or failure" May 10 22:06:17.058: INFO: Pod "client-containers-717ba0c9-64ef-497a-953c-f6af7d350f3e": Phase="Pending", Reason="", readiness=false. 
Elapsed: 15.699022ms May 10 22:06:19.062: INFO: Pod "client-containers-717ba0c9-64ef-497a-953c-f6af7d350f3e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019421503s May 10 22:06:21.065: INFO: Pod "client-containers-717ba0c9-64ef-497a-953c-f6af7d350f3e": Phase="Running", Reason="", readiness=true. Elapsed: 4.023059771s May 10 22:06:23.069: INFO: Pod "client-containers-717ba0c9-64ef-497a-953c-f6af7d350f3e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.02700349s STEP: Saw pod success May 10 22:06:23.069: INFO: Pod "client-containers-717ba0c9-64ef-497a-953c-f6af7d350f3e" satisfied condition "success or failure" May 10 22:06:23.072: INFO: Trying to get logs from node jerma-worker pod client-containers-717ba0c9-64ef-497a-953c-f6af7d350f3e container test-container: STEP: delete the pod May 10 22:06:23.138: INFO: Waiting for pod client-containers-717ba0c9-64ef-497a-953c-f6af7d350f3e to disappear May 10 22:06:23.158: INFO: Pod client-containers-717ba0c9-64ef-497a-953c-f6af7d350f3e no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 10 22:06:23.158: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-517" for this suite. 
• [SLOW TEST:6.177 seconds] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]","total":278,"completed":192,"skipped":3184,"failed":0} SSSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 10 22:06:23.165: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64 STEP: create the container to handle the HTTPGet hook request. 
[It] should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the pod with lifecycle hook STEP: delete the pod with lifecycle hook May 10 22:06:31.243: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 10 22:06:31.290: INFO: Pod pod-with-prestop-exec-hook still exists May 10 22:06:33.290: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 10 22:06:33.294: INFO: Pod pod-with-prestop-exec-hook still exists May 10 22:06:35.290: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 10 22:06:35.294: INFO: Pod pod-with-prestop-exec-hook still exists May 10 22:06:37.290: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 10 22:06:37.294: INFO: Pod pod-with-prestop-exec-hook still exists May 10 22:06:39.290: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 10 22:06:39.294: INFO: Pod pod-with-prestop-exec-hook no longer exists STEP: check prestop hook [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 10 22:06:39.300: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-4296" for this suite. 
• [SLOW TEST:16.143 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]","total":278,"completed":193,"skipped":3191,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicationController should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 10 22:06:39.308: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Given a Pod with a 'name' label pod-adoption is created STEP: When a replication controller with a matching selector is created STEP: Then the orphan pod is adopted [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 10 22:06:44.513: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-717" for this suite. 
• [SLOW TEST:5.232 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] ReplicationController should adopt matching pods on creation [Conformance]","total":278,"completed":194,"skipped":3216,"failed":0} SS ------------------------------ [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 10 22:06:44.540: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:69 [It] RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 10 22:06:44.604: INFO: Creating replica set "test-rolling-update-controller" (going to be adopted) May 10 22:06:44.638: INFO: Pod name sample-pod: Found 0 pods out of 1 May 10 22:06:49.645: INFO: Pod name sample-pod: Found 1 pods out of 1 STEP: ensuring each pod is running May 10 22:06:49.645: INFO: Creating deployment "test-rolling-update-deployment" May 10 22:06:49.683: INFO: Ensuring deployment "test-rolling-update-deployment" gets the next revision from the one the adopted replica set "test-rolling-update-controller" has May 10 22:06:49.702: INFO: new replicaset for deployment 
"test-rolling-update-deployment" is yet to be created May 10 22:06:51.910: INFO: Ensuring status for deployment "test-rolling-update-deployment" is the expected May 10 22:06:51.912: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724745209, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724745209, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724745209, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724745209, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-67cf4f6444\" is progressing."}}, CollisionCount:(*int32)(nil)} May 10 22:06:53.915: INFO: Ensuring deployment "test-rolling-update-deployment" has one old replica set (the one it adopted) [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:63 May 10 22:06:53.926: INFO: Deployment "test-rolling-update-deployment": &Deployment{ObjectMeta:{test-rolling-update-deployment deployment-8423 /apis/apps/v1/namespaces/deployment-8423/deployments/test-rolling-update-deployment 5bb0e1e7-43bc-4d9a-9f13-3bbd95eeae74 15076258 1 2020-05-10 22:06:49 +0000 UTC map[name:sample-pod] map[deployment.kubernetes.io/revision:3546343826724305833] [] [] []},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod] 
map[] [] [] []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc005c49498 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2020-05-10 22:06:49 +0000 UTC,LastTransitionTime:2020-05-10 22:06:49 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-rolling-update-deployment-67cf4f6444" has successfully progressed.,LastUpdateTime:2020-05-10 22:06:53 +0000 UTC,LastTransitionTime:2020-05-10 22:06:49 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},} May 10 22:06:53.928: INFO: New ReplicaSet "test-rolling-update-deployment-67cf4f6444" of Deployment "test-rolling-update-deployment": &ReplicaSet{ObjectMeta:{test-rolling-update-deployment-67cf4f6444 deployment-8423 /apis/apps/v1/namespaces/deployment-8423/replicasets/test-rolling-update-deployment-67cf4f6444 31b427cd-d6c4-49a9-bfff-b6e332b78be4 15076247 1 2020-05-10 22:06:49 +0000 UTC map[name:sample-pod 
pod-template-hash:67cf4f6444] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:3546343826724305833] [{apps/v1 Deployment test-rolling-update-deployment 5bb0e1e7-43bc-4d9a-9f13-3bbd95eeae74 0xc005c49937 0xc005c49938}] [] []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod-template-hash: 67cf4f6444,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod pod-template-hash:67cf4f6444] map[] [] [] []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc005c499b8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} May 10 22:06:53.928: INFO: All old ReplicaSets of Deployment "test-rolling-update-deployment": May 10 22:06:53.928: INFO: &ReplicaSet{ObjectMeta:{test-rolling-update-controller deployment-8423 /apis/apps/v1/namespaces/deployment-8423/replicasets/test-rolling-update-controller c7eb9b7d-65eb-4cc2-a914-78a1ff2c4f96 15076257 2 2020-05-10 22:06:44 +0000 UTC map[name:sample-pod pod:httpd] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:3546343826724305832] [{apps/v1 Deployment test-rolling-update-deployment 
5bb0e1e7-43bc-4d9a-9f13-3bbd95eeae74 0xc005c49867 0xc005c49868}] [] []},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod pod:httpd] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc005c498c8 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} May 10 22:06:53.967: INFO: Pod "test-rolling-update-deployment-67cf4f6444-r6m77" is available: &Pod{ObjectMeta:{test-rolling-update-deployment-67cf4f6444-r6m77 test-rolling-update-deployment-67cf4f6444- deployment-8423 /api/v1/namespaces/deployment-8423/pods/test-rolling-update-deployment-67cf4f6444-r6m77 4667079c-41c6-4b78-97b2-4f2667d2c49d 15076246 0 2020-05-10 22:06:49 +0000 UTC map[name:sample-pod pod-template-hash:67cf4f6444] map[] [{apps/v1 ReplicaSet test-rolling-update-deployment-67cf4f6444 31b427cd-d6c4-49a9-bfff-b6e332b78be4 0xc005c49e27 0xc005c49e28}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-525qt,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-525qt,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-525qt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Host
name:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-10 22:06:49 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-10 22:06:53 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-10 22:06:53 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-10 22:06:49 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.8,PodIP:10.244.2.226,StartTime:2020-05-10 22:06:49 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-05-10 22:06:53 +0000 
UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,ImageID:gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5,ContainerID:containerd://3b9fded7045d8adbe9b9d579d7d3de6c0e1863214a4e531da9b8c8a49db95854,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.226,},},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 10 22:06:53.967: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-8423" for this suite. • [SLOW TEST:9.442 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance]","total":278,"completed":195,"skipped":3218,"failed":0} SSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 10 22:06:53.983: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace 
[BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [It] should check if v1 is in available api versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: validating api versions May 10 22:06:54.047: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config api-versions' May 10 22:06:54.453: INFO: stderr: "" May 10 22:06:54.453: INFO: stdout: "admissionregistration.k8s.io/v1\nadmissionregistration.k8s.io/v1beta1\napiextensions.k8s.io/v1\napiextensions.k8s.io/v1beta1\napiregistration.k8s.io/v1\napiregistration.k8s.io/v1beta1\napps/v1\nauthentication.k8s.io/v1\nauthentication.k8s.io/v1beta1\nauthorization.k8s.io/v1\nauthorization.k8s.io/v1beta1\nautoscaling/v1\nautoscaling/v2beta1\nautoscaling/v2beta2\nbatch/v1\nbatch/v1beta1\ncertificates.k8s.io/v1beta1\ncoordination.k8s.io/v1\ncoordination.k8s.io/v1beta1\ndiscovery.k8s.io/v1beta1\nevents.k8s.io/v1beta1\nextensions/v1beta1\nnetworking.k8s.io/v1\nnetworking.k8s.io/v1beta1\nnode.k8s.io/v1beta1\npolicy/v1beta1\nrbac.authorization.k8s.io/v1\nrbac.authorization.k8s.io/v1beta1\nscheduling.k8s.io/v1\nscheduling.k8s.io/v1beta1\nstorage.k8s.io/v1\nstorage.k8s.io/v1beta1\nv1\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 10 22:06:54.453: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-3933" for this suite. 
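Editorial note: the api-versions check above boils down to asserting that `v1` appears as a whole line in the stdout of `kubectl api-versions`. The sketch below reproduces that membership check against an abridged sample of the stdout captured in this log; it does not contact a live cluster.

```python
# Abridged sample of the `kubectl api-versions` stdout recorded above.
stdout = (
    "admissionregistration.k8s.io/v1\n"
    "apps/v1\n"
    "batch/v1\n"
    "storage.k8s.io/v1\n"
    "v1\n"
)

# Split on newlines so that "v1" must match a whole line, not a substring
# of e.g. "apps/v1".
versions = stdout.strip().split("\n")
print("v1" in versions)  # True
```

Matching whole lines matters: a naive substring test would succeed even if only group versions like `apps/v1` were present.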
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions [Conformance]","total":278,"completed":196,"skipped":3225,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Security Context When creating a pod with privileged should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 10 22:06:54.463: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:39 [It] should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 10 22:06:54.538: INFO: Waiting up to 5m0s for pod "busybox-privileged-false-1569fad3-624e-46b2-ba7e-3774a6e86897" in namespace "security-context-test-4424" to be "success or failure" May 10 22:06:54.546: INFO: Pod "busybox-privileged-false-1569fad3-624e-46b2-ba7e-3774a6e86897": Phase="Pending", Reason="", readiness=false. Elapsed: 8.679683ms May 10 22:06:56.550: INFO: Pod "busybox-privileged-false-1569fad3-624e-46b2-ba7e-3774a6e86897": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012815481s May 10 22:06:58.555: INFO: Pod "busybox-privileged-false-1569fad3-624e-46b2-ba7e-3774a6e86897": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.01719109s May 10 22:06:58.555: INFO: Pod "busybox-privileged-false-1569fad3-624e-46b2-ba7e-3774a6e86897" satisfied condition "success or failure" May 10 22:06:58.560: INFO: Got logs for pod "busybox-privileged-false-1569fad3-624e-46b2-ba7e-3774a6e86897": "ip: RTNETLINK answers: Operation not permitted\n" [AfterEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 10 22:06:58.560: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-4424" for this suite. •{"msg":"PASSED [k8s.io] Security Context When creating a pod with privileged should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":197,"skipped":3256,"failed":0} ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 10 22:06:58.566: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:64 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:79 STEP: Creating service test in namespace statefulset-7310 [It] Should recreate evicted statefulset [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Looking for a node to 
schedule stateful set and pod STEP: Creating pod with conflicting port in namespace statefulset-7310 STEP: Creating statefulset with conflicting port in namespace statefulset-7310 STEP: Waiting until pod test-pod will start running in namespace statefulset-7310 STEP: Waiting until stateful pod ss-0 will be recreated and deleted at least once in namespace statefulset-7310 May 10 22:07:04.771: INFO: Observed stateful pod in namespace: statefulset-7310, name: ss-0, uid: 8d10d9aa-0c08-45bc-8690-c87f59b62b43, status phase: Pending. Waiting for statefulset controller to delete. May 10 22:07:04.965: INFO: Observed stateful pod in namespace: statefulset-7310, name: ss-0, uid: 8d10d9aa-0c08-45bc-8690-c87f59b62b43, status phase: Failed. Waiting for statefulset controller to delete. May 10 22:07:04.990: INFO: Observed stateful pod in namespace: statefulset-7310, name: ss-0, uid: 8d10d9aa-0c08-45bc-8690-c87f59b62b43, status phase: Failed. Waiting for statefulset controller to delete. May 10 22:07:05.003: INFO: Observed delete event for stateful pod ss-0 in namespace statefulset-7310 STEP: Removing pod with conflicting port in namespace statefulset-7310 STEP: Waiting when stateful pod ss-0 will be recreated in namespace statefulset-7310 and will be in running state [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90 May 10 22:07:11.061: INFO: Deleting all statefulset in ns statefulset-7310 May 10 22:07:11.064: INFO: Scaling statefulset ss to 0 May 10 22:07:21.087: INFO: Waiting for statefulset status.replicas updated to 0 May 10 22:07:21.090: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 10 22:07:21.108: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-7310" for this suite. 
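Editorial note: the recreate loop observed above (ss-0 repeatedly enters `Failed` while another pod holds its host port, and the controller deletes and recreates it under the same ordinal name) can be sketched as a toy reconcile pass. This is an illustrative simplification, not the StatefulSet controller's implementation.

```python
def reconcile(pods: list, desired: int = 1) -> list:
    """One reconcile pass: drop Failed pods, then recreate missing
    replicas with their stable ordinal identity (here always ss-0)."""
    alive = [p for p in pods if p["phase"] != "Failed"]
    while len(alive) < desired:
        alive.append({"name": "ss-0", "phase": "Pending"})
    return alive

# ss-0 failed because test-pod held its port; after one pass a fresh
# ss-0 exists, Pending until the conflicting pod is removed.
print(reconcile([{"name": "ss-0", "phase": "Failed"}]))
```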
• [SLOW TEST:22.547 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 Should recreate evicted statefulset [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]","total":278,"completed":198,"skipped":3256,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 10 22:07:21.114: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the rc STEP: delete the rc STEP: wait for all pods to be garbage collected STEP: Gathering metrics W0510 22:07:31.275114 6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. 
May 10 22:07:31.275: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 10 22:07:31.275: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-2636" for this suite. 
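Editorial note: the "not orphaning" behavior verified above means a background cascade delete: the garbage collector removes every pod whose ownerReference points at the deleted replication controller. The sketch below models that selection in plain Python; the UIDs and pod names are illustrative, not taken from this run.

```python
def cascade_delete(owner_uid: str, pods: list) -> list:
    """Return the pods that survive deleting the owner with the given UID:
    anything owned by it is garbage-collected, unrelated pods remain."""
    return [p for p in pods if p.get("ownerUID") != owner_uid]

pods = [
    {"name": "rc-pod-1", "ownerUID": "rc-123"},
    {"name": "rc-pod-2", "ownerUID": "rc-123"},
    {"name": "other",    "ownerUID": "rs-456"},
]
print(cascade_delete("rc-123", pods))  # only the unrelated "other" pod remains
```

With an orphaning delete the ownerReferences would instead be cleared and all three pods would survive.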
• [SLOW TEST:10.168 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance]","total":278,"completed":199,"skipped":3280,"failed":0} SSSSS ------------------------------ [sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 10 22:07:31.282: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin May 10 22:07:31.514: INFO: Waiting up to 5m0s for pod "downwardapi-volume-84c40154-7968-4d21-849b-cb9c1dece4d3" in namespace "downward-api-3382" to be "success or failure" May 10 22:07:31.517: INFO: Pod "downwardapi-volume-84c40154-7968-4d21-849b-cb9c1dece4d3": Phase="Pending", Reason="", readiness=false. Elapsed: 3.23785ms May 10 22:07:33.548: INFO: Pod "downwardapi-volume-84c40154-7968-4d21-849b-cb9c1dece4d3": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.034058008s May 10 22:07:35.552: INFO: Pod "downwardapi-volume-84c40154-7968-4d21-849b-cb9c1dece4d3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.037608095s STEP: Saw pod success May 10 22:07:35.552: INFO: Pod "downwardapi-volume-84c40154-7968-4d21-849b-cb9c1dece4d3" satisfied condition "success or failure" May 10 22:07:35.555: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-84c40154-7968-4d21-849b-cb9c1dece4d3 container client-container: STEP: delete the pod May 10 22:07:35.603: INFO: Waiting for pod downwardapi-volume-84c40154-7968-4d21-849b-cb9c1dece4d3 to disappear May 10 22:07:35.614: INFO: Pod downwardapi-volume-84c40154-7968-4d21-849b-cb9c1dece4d3 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 10 22:07:35.614: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-3382" for this suite. 
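Editorial note: the downward API volume test above mounts a file whose content is resolved from a `fieldRef` such as `metadata.name` against the pod's own metadata. The resolver below is a minimal, hypothetical sketch of that lookup, not the kubelet's implementation, and handles only simple two-segment paths.

```python
def resolve_downward_api(field_path: str, pod_meta: dict) -> str:
    """Resolve a simple fieldRef path like 'metadata.name' against a
    pod object and return the string written into the volume file."""
    section, key = field_path.split(".", 1)
    return str(pod_meta[section][key])

pod_meta = {"metadata": {"name": "downwardapi-volume-84c40154",
                         "namespace": "downward-api-3382"}}
print(resolve_downward_api("metadata.name", pod_meta))
```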
•{"msg":"PASSED [sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance]","total":278,"completed":200,"skipped":3285,"failed":0} ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 10 22:07:35.622: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0644 on tmpfs May 10 22:07:35.669: INFO: Waiting up to 5m0s for pod "pod-a8c509fd-b32f-4b7a-89f5-3e3954b86090" in namespace "emptydir-7910" to be "success or failure" May 10 22:07:35.674: INFO: Pod "pod-a8c509fd-b32f-4b7a-89f5-3e3954b86090": Phase="Pending", Reason="", readiness=false. Elapsed: 4.496281ms May 10 22:07:37.678: INFO: Pod "pod-a8c509fd-b32f-4b7a-89f5-3e3954b86090": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009126276s May 10 22:07:39.682: INFO: Pod "pod-a8c509fd-b32f-4b7a-89f5-3e3954b86090": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.013276014s STEP: Saw pod success May 10 22:07:39.682: INFO: Pod "pod-a8c509fd-b32f-4b7a-89f5-3e3954b86090" satisfied condition "success or failure" May 10 22:07:39.685: INFO: Trying to get logs from node jerma-worker pod pod-a8c509fd-b32f-4b7a-89f5-3e3954b86090 container test-container: STEP: delete the pod May 10 22:07:39.714: INFO: Waiting for pod pod-a8c509fd-b32f-4b7a-89f5-3e3954b86090 to disappear May 10 22:07:39.723: INFO: Pod pod-a8c509fd-b32f-4b7a-89f5-3e3954b86090 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 10 22:07:39.723: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-7910" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":201,"skipped":3285,"failed":0} SSSSSSSSSSS ------------------------------ [k8s.io] [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 10 22:07:39.730: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename events STEP: Waiting for a default service account to be provisioned in namespace [It] should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: retrieving the pod May 10 22:07:43.838: INFO: 
&Pod{ObjectMeta:{send-events-2671ebbf-f4c6-443a-ba3e-83ef35bc9312 events-2188 /api/v1/namespaces/events-2188/pods/send-events-2671ebbf-f4c6-443a-ba3e-83ef35bc9312 58bb872a-d07b-477f-b460-b9ac1d13fbf5 15076716 0 2020-05-10 22:07:39 +0000 UTC map[name:foo time:817949110] map[] [] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-lhbfl,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-lhbfl,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:p,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,Command:[],Args:[serve-hostname],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:,HostPort:0,ContainerPort:80,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-lhbfl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAs
User:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-10 22:07:39 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-10 22:07:42 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-10 22:07:42 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-10 22:07:39 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.8,PodIP:10.244.2.232,StartTime:2020-05-10 22:07:39 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:p,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-05-10 22:07:42 +0000 
UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,ImageID:gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5,ContainerID:containerd://9b7a4fb960fc471476ba6d52803df2bec1e572a4943fdb90ccd35b74cd510e5a,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.232,},},EphemeralContainerStatuses:[]ContainerStatus{},},} STEP: checking for scheduler event about the pod May 10 22:07:45.842: INFO: Saw scheduler event for our pod. STEP: checking for kubelet event about the pod May 10 22:07:47.847: INFO: Saw kubelet event for our pod. STEP: deleting the pod [AfterEach] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 10 22:07:47.852: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "events-2188" for this suite. 
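The pod whose full struct is dumped above can be read more easily as a manifest. The following is a reconstruction from that dump, keeping only the fields the Events test actually relies on (the service-account volume, tolerations, and status fields are injected by the cluster and omitted here):

```yaml
# Reconstructed from the PodSpec dump above: the pod the Events test
# watches for scheduler and kubelet events.
apiVersion: v1
kind: Pod
metadata:
  name: send-events-2671ebbf-f4c6-443a-ba3e-83ef35bc9312
  namespace: events-2188
  labels:
    name: foo
    time: "817949110"
spec:
  containers:
  - name: p
    image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8
    args: ["serve-hostname"]
    ports:
    - containerPort: 80
      protocol: TCP
```

The test then asserts that both a scheduler event (Scheduled) and a kubelet event (e.g. Pulled/Created/Started) are recorded against this pod, which is what the two "Saw ... event for our pod" lines above confirm.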
• [SLOW TEST:8.185 seconds] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running [Conformance]","total":278,"completed":202,"skipped":3296,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 10 22:07:47.916: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating the pod May 10 22:07:52.513: INFO: Successfully updated pod "annotationupdateb63cfc0a-90c2-49f8-a39f-dd079fa26ed1" [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 10 22:07:56.678: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2953" for this suite. 
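The annotation-update test above works by projecting pod annotations into a volume file and then mutating the annotations. A minimal sketch of such a pod follows; the log does not show the actual spec, so the volume name, mount path, and annotation key here are illustrative assumptions, not taken from the test:

```yaml
# Sketch (assumed names): annotations surfaced as a file via a
# projected downwardAPI volume; editing metadata.annotations causes
# the kubelet to rewrite the mounted file, which the test polls for.
apiVersion: v1
kind: Pod
metadata:
  name: annotationupdate-example   # illustrative name
  annotations:
    build: "one"                   # value the test later updates
spec:
  containers:
  - name: client-container
    image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: annotations
            fieldRef:
              fieldPath: metadata.annotations
```

The "Successfully updated pod" line in the log corresponds to the annotation mutation; the subsequent wait is for the projected file to reflect the new value.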
• [SLOW TEST:8.770 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34 should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance]","total":278,"completed":203,"skipped":3314,"failed":0} SSSS ------------------------------ [sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 10 22:07:56.686: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [BeforeEach] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:324 [It] should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating a replication controller May 10 22:07:56.738: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-3369' May 10 22:07:59.621: INFO: stderr: "" May 10 22:07:59.621: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. 
May 10 22:07:59.621: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-3369' May 10 22:07:59.734: INFO: stderr: "" May 10 22:07:59.734: INFO: stdout: "update-demo-nautilus-9jb2f update-demo-nautilus-psrnd " May 10 22:07:59.734: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-9jb2f -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-3369' May 10 22:07:59.855: INFO: stderr: "" May 10 22:07:59.855: INFO: stdout: "" May 10 22:07:59.855: INFO: update-demo-nautilus-9jb2f is created but not running May 10 22:08:04.855: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-3369' May 10 22:08:04.951: INFO: stderr: "" May 10 22:08:04.951: INFO: stdout: "update-demo-nautilus-9jb2f update-demo-nautilus-psrnd " May 10 22:08:04.951: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-9jb2f -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-3369' May 10 22:08:05.050: INFO: stderr: "" May 10 22:08:05.050: INFO: stdout: "true" May 10 22:08:05.050: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-9jb2f -o template --template={{if (exists . 
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-3369' May 10 22:08:05.170: INFO: stderr: "" May 10 22:08:05.170: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 10 22:08:05.170: INFO: validating pod update-demo-nautilus-9jb2f May 10 22:08:05.174: INFO: got data: { "image": "nautilus.jpg" } May 10 22:08:05.174: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . May 10 22:08:05.174: INFO: update-demo-nautilus-9jb2f is verified up and running May 10 22:08:05.174: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-psrnd -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-3369' May 10 22:08:05.265: INFO: stderr: "" May 10 22:08:05.265: INFO: stdout: "true" May 10 22:08:05.265: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-psrnd -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-3369' May 10 22:08:05.369: INFO: stderr: "" May 10 22:08:05.369: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 10 22:08:05.369: INFO: validating pod update-demo-nautilus-psrnd May 10 22:08:05.373: INFO: got data: { "image": "nautilus.jpg" } May 10 22:08:05.373: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . 
May 10 22:08:05.373: INFO: update-demo-nautilus-psrnd is verified up and running STEP: using delete to clean up resources May 10 22:08:05.373: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-3369' May 10 22:08:05.482: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" May 10 22:08:05.483: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" May 10 22:08:05.483: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-3369' May 10 22:08:05.599: INFO: stderr: "No resources found in kubectl-3369 namespace.\n" May 10 22:08:05.599: INFO: stdout: "" May 10 22:08:05.599: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-3369 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' May 10 22:08:05.714: INFO: stderr: "" May 10 22:08:05.714: INFO: stdout: "update-demo-nautilus-9jb2f\nupdate-demo-nautilus-psrnd\n" May 10 22:08:06.214: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-3369' May 10 22:08:06.320: INFO: stderr: "No resources found in kubectl-3369 namespace.\n" May 10 22:08:06.320: INFO: stdout: "" May 10 22:08:06.320: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-3369 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' May 10 22:08:06.414: INFO: stderr: "" May 10 22:08:06.414: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 10 22:08:06.415: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-3369" for this suite. • [SLOW TEST:9.736 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:322 should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance]","total":278,"completed":204,"skipped":3318,"failed":0} SSS ------------------------------ [sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 10 22:08:06.422: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] pod should support shared volumes between containers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating Pod STEP: Waiting for the pod running STEP: Geting the pod STEP: Reading file content from the nginx-container May 10 22:08:15.510: INFO: ExecWithOptions {Command:[/bin/sh -c cat /usr/share/volumeshare/shareddata.txt] Namespace:emptydir-9005 PodName:pod-sharedvolume-bb2d2c04-507f-41b5-b2ec-bfb091f1541e ContainerName:busybox-main-container Stdin: 
CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 10 22:08:15.510: INFO: >>> kubeConfig: /root/.kube/config I0510 22:08:15.549967 6 log.go:172] (0xc0028fe160) (0xc00141b720) Create stream I0510 22:08:15.550001 6 log.go:172] (0xc0028fe160) (0xc00141b720) Stream added, broadcasting: 1 I0510 22:08:15.551796 6 log.go:172] (0xc0028fe160) Reply frame received for 1 I0510 22:08:15.551841 6 log.go:172] (0xc0028fe160) (0xc002320dc0) Create stream I0510 22:08:15.551857 6 log.go:172] (0xc0028fe160) (0xc002320dc0) Stream added, broadcasting: 3 I0510 22:08:15.552764 6 log.go:172] (0xc0028fe160) Reply frame received for 3 I0510 22:08:15.552791 6 log.go:172] (0xc0028fe160) (0xc000aa00a0) Create stream I0510 22:08:15.552801 6 log.go:172] (0xc0028fe160) (0xc000aa00a0) Stream added, broadcasting: 5 I0510 22:08:15.554184 6 log.go:172] (0xc0028fe160) Reply frame received for 5 I0510 22:08:15.615859 6 log.go:172] (0xc0028fe160) Data frame received for 5 I0510 22:08:15.615903 6 log.go:172] (0xc000aa00a0) (5) Data frame handling I0510 22:08:15.615934 6 log.go:172] (0xc0028fe160) Data frame received for 3 I0510 22:08:15.615943 6 log.go:172] (0xc002320dc0) (3) Data frame handling I0510 22:08:15.615966 6 log.go:172] (0xc002320dc0) (3) Data frame sent I0510 22:08:15.615988 6 log.go:172] (0xc0028fe160) Data frame received for 3 I0510 22:08:15.615998 6 log.go:172] (0xc002320dc0) (3) Data frame handling I0510 22:08:15.617350 6 log.go:172] (0xc0028fe160) Data frame received for 1 I0510 22:08:15.617406 6 log.go:172] (0xc00141b720) (1) Data frame handling I0510 22:08:15.617431 6 log.go:172] (0xc00141b720) (1) Data frame sent I0510 22:08:15.617456 6 log.go:172] (0xc0028fe160) (0xc00141b720) Stream removed, broadcasting: 1 I0510 22:08:15.617494 6 log.go:172] (0xc0028fe160) Go away received I0510 22:08:15.617578 6 log.go:172] (0xc0028fe160) (0xc00141b720) Stream removed, broadcasting: 1 I0510 22:08:15.617591 6 log.go:172] (0xc0028fe160) (0xc002320dc0) Stream removed, broadcasting: 3 
I0510 22:08:15.617599 6 log.go:172] (0xc0028fe160) (0xc000aa00a0) Stream removed, broadcasting: 5 May 10 22:08:15.617: INFO: Exec stderr: "" [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 10 22:08:15.617: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-9005" for this suite. • [SLOW TEST:9.204 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 pod should support shared volumes between containers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance]","total":278,"completed":205,"skipped":3321,"failed":0} [sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 10 22:08:15.627: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name secret-test-map-ccca61d7-bdad-4407-b1ab-72f2313d5b4a STEP: Creating a pod to test consume secrets May 10 22:08:15.752: INFO: Waiting up to 5m0s for pod "pod-secrets-7260dd61-bd80-4aee-83b1-ce9f37118131" in namespace "secrets-4132" to be "success or failure" May 10 
22:08:15.758: INFO: Pod "pod-secrets-7260dd61-bd80-4aee-83b1-ce9f37118131": Phase="Pending", Reason="", readiness=false. Elapsed: 5.827594ms May 10 22:08:17.762: INFO: Pod "pod-secrets-7260dd61-bd80-4aee-83b1-ce9f37118131": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009975646s May 10 22:08:19.766: INFO: Pod "pod-secrets-7260dd61-bd80-4aee-83b1-ce9f37118131": Phase="Pending", Reason="", readiness=false. Elapsed: 4.014005963s May 10 22:08:21.771: INFO: Pod "pod-secrets-7260dd61-bd80-4aee-83b1-ce9f37118131": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.018407383s STEP: Saw pod success May 10 22:08:21.771: INFO: Pod "pod-secrets-7260dd61-bd80-4aee-83b1-ce9f37118131" satisfied condition "success or failure" May 10 22:08:21.773: INFO: Trying to get logs from node jerma-worker pod pod-secrets-7260dd61-bd80-4aee-83b1-ce9f37118131 container secret-volume-test: STEP: delete the pod May 10 22:08:21.808: INFO: Waiting for pod pod-secrets-7260dd61-bd80-4aee-83b1-ce9f37118131 to disappear May 10 22:08:21.834: INFO: Pod pod-secrets-7260dd61-bd80-4aee-83b1-ce9f37118131 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 10 22:08:21.834: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-4132" for this suite. 
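The "volume with mappings" variant above consumes a secret through a volume while remapping a key to a different file path. The secret name below is the one created in the log; the key and target path are assumptions for illustration (the log does not print them):

```yaml
# Sketch of a secret volume with an items mapping: the secret key is
# exposed under a renamed path instead of its own name.
# Key/path values are illustrative.
apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets-example
spec:
  containers:
  - name: secret-volume-test
    image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
      readOnly: true
  volumes:
  - name: secret-volume
    secret:
      secretName: secret-test-map-ccca61d7-bdad-4407-b1ab-72f2313d5b4a
      items:
      - key: data-1            # assumed key name
        path: new-path-data-1  # assumed remapped path
```

The test container reads the file at the mapped path and exits 0 on a content match, which is why the pod is expected to reach phase Succeeded ("success or failure" above).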
• [SLOW TEST:6.215 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":278,"completed":206,"skipped":3321,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 10 22:08:21.843: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should delete RS created by deployment when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the deployment STEP: Wait for the Deployment to create new ReplicaSet STEP: delete the deployment STEP: wait for all rs to be garbage collected STEP: expected 0 rs, got 1 rs STEP: expected 0 pods, got 2 pods STEP: Gathering metrics W0510 22:08:23.001622 6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. 
May 10 22:08:23.001: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 10 22:08:23.001: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-9707" for this suite. 
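The garbage-collector behavior verified above hinges on ownerReferences: a ReplicaSet created by a Deployment points back at its owner, so a non-orphaning delete of the Deployment cascades to the ReplicaSet and its pods. A sketch of the reference block (values are placeholders, not from this log):

```yaml
# Illustrative ownerReferences block on a Deployment-managed
# ReplicaSet; the GC deletes this object once its owner is gone
# and the delete was not an orphaning delete.
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: example-rs
  ownerReferences:
  - apiVersion: apps/v1
    kind: Deployment
    name: example-deployment
    uid: "<uid-of-the-owning-deployment>"  # placeholder
    controller: true
    blockOwnerDeletion: true
```

The intermediate "expected 0 rs, got 1 rs" / "expected 0 pods, got 2 pods" STEP lines are the test polling while the cascade is still in flight, not failures.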
•{"msg":"PASSED [sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance]","total":278,"completed":207,"skipped":3353,"failed":0} ------------------------------ [sig-cli] Kubectl client Kubectl replace should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 10 22:08:23.008: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [BeforeEach] Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1790 [It] should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: running the image docker.io/library/httpd:2.4.38-alpine May 10 22:08:23.509: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-pod --generator=run-pod/v1 --image=docker.io/library/httpd:2.4.38-alpine --labels=run=e2e-test-httpd-pod --namespace=kubectl-8823' May 10 22:08:23.616: INFO: stderr: "" May 10 22:08:23.616: INFO: stdout: "pod/e2e-test-httpd-pod created\n" STEP: verifying the pod e2e-test-httpd-pod is running STEP: verifying the pod e2e-test-httpd-pod was created May 10 22:08:28.666: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod e2e-test-httpd-pod --namespace=kubectl-8823 -o json' May 10 22:08:28.777: INFO: stderr: "" May 10 22:08:28.777: INFO: stdout: "{\n \"apiVersion\": \"v1\",\n \"kind\": \"Pod\",\n 
\"metadata\": {\n \"creationTimestamp\": \"2020-05-10T22:08:23Z\",\n \"labels\": {\n \"run\": \"e2e-test-httpd-pod\"\n },\n \"name\": \"e2e-test-httpd-pod\",\n \"namespace\": \"kubectl-8823\",\n \"resourceVersion\": \"15077046\",\n \"selfLink\": \"/api/v1/namespaces/kubectl-8823/pods/e2e-test-httpd-pod\",\n \"uid\": \"3d10afd1-59fd-46d7-8e34-68436f97a439\"\n },\n \"spec\": {\n \"containers\": [\n {\n \"image\": \"docker.io/library/httpd:2.4.38-alpine\",\n \"imagePullPolicy\": \"IfNotPresent\",\n \"name\": \"e2e-test-httpd-pod\",\n \"resources\": {},\n \"terminationMessagePath\": \"/dev/termination-log\",\n \"terminationMessagePolicy\": \"File\",\n \"volumeMounts\": [\n {\n \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n \"name\": \"default-token-lbld9\",\n \"readOnly\": true\n }\n ]\n }\n ],\n \"dnsPolicy\": \"ClusterFirst\",\n \"enableServiceLinks\": true,\n \"nodeName\": \"jerma-worker\",\n \"priority\": 0,\n \"restartPolicy\": \"Always\",\n \"schedulerName\": \"default-scheduler\",\n \"securityContext\": {},\n \"serviceAccount\": \"default\",\n \"serviceAccountName\": \"default\",\n \"terminationGracePeriodSeconds\": 30,\n \"tolerations\": [\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/not-ready\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n },\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/unreachable\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n }\n ],\n \"volumes\": [\n {\n \"name\": \"default-token-lbld9\",\n \"secret\": {\n \"defaultMode\": 420,\n \"secretName\": \"default-token-lbld9\"\n }\n }\n ]\n },\n \"status\": {\n \"conditions\": [\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-05-10T22:08:23Z\",\n \"status\": \"True\",\n \"type\": \"Initialized\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-05-10T22:08:28Z\",\n \"status\": \"True\",\n \"type\": \"Ready\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": 
\"2020-05-10T22:08:28Z\",\n \"status\": \"True\",\n \"type\": \"ContainersReady\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-05-10T22:08:23Z\",\n \"status\": \"True\",\n \"type\": \"PodScheduled\"\n }\n ],\n \"containerStatuses\": [\n {\n \"containerID\": \"containerd://bcc47c98d5eaa28e8e8f0a8a408c139426808ed477f28456bc4009f1af2de964\",\n \"image\": \"docker.io/library/httpd:2.4.38-alpine\",\n \"imageID\": \"docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060\",\n \"lastState\": {},\n \"name\": \"e2e-test-httpd-pod\",\n \"ready\": true,\n \"restartCount\": 0,\n \"started\": true,\n \"state\": {\n \"running\": {\n \"startedAt\": \"2020-05-10T22:08:27Z\"\n }\n }\n }\n ],\n \"hostIP\": \"172.17.0.10\",\n \"phase\": \"Running\",\n \"podIP\": \"10.244.1.66\",\n \"podIPs\": [\n {\n \"ip\": \"10.244.1.66\"\n }\n ],\n \"qosClass\": \"BestEffort\",\n \"startTime\": \"2020-05-10T22:08:23Z\"\n }\n}\n" STEP: replace the image in the pod May 10 22:08:28.778: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config replace -f - --namespace=kubectl-8823' May 10 22:08:29.039: INFO: stderr: "" May 10 22:08:29.039: INFO: stdout: "pod/e2e-test-httpd-pod replaced\n" STEP: verifying the pod e2e-test-httpd-pod has the right image docker.io/library/busybox:1.29 [AfterEach] Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1795 May 10 22:08:29.042: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-httpd-pod --namespace=kubectl-8823' May 10 22:08:39.277: INFO: stderr: "" May 10 22:08:39.277: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 10 22:08:39.277: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-8823" for this suite. 
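The `kubectl replace -f -` step above pipes back the pod JSON with only the image changed. A minimal sketch of the replacement spec is below; per the verification step, the image becomes docker.io/library/busybox:1.29, while the rest of the fields (service-account volume, tolerations, nodeName, and so on) come from the original pod JSON and are omitted here:

```yaml
# Sketch of the replacement manifest: same pod identity, image
# swapped from httpd:2.4.38-alpine to busybox:1.29. All omitted
# fields are carried over from the original object.
apiVersion: v1
kind: Pod
metadata:
  name: e2e-test-httpd-pod
  namespace: kubectl-8823
  labels:
    run: e2e-test-httpd-pod
spec:
  containers:
  - name: e2e-test-httpd-pod
    image: docker.io/library/busybox:1.29
```

Note that `kubectl replace` on a live Pod only succeeds because the image field is one of the few Pod spec fields that is mutable in place.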
• [SLOW TEST:16.294 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1786 should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl replace should update a single-container pod's image [Conformance]","total":278,"completed":208,"skipped":3353,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate configmap [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 10 22:08:39.302: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 10 22:08:40.082: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 10 22:08:42.154: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", 
Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724745320, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724745320, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724745320, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724745320, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 10 22:08:45.503: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate configmap [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Registering the mutating configmap webhook via the AdmissionRegistration API STEP: create a configmap that should be updated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 10 22:08:45.851: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-4874" for this suite. STEP: Destroying namespace "webhook-4874-markers" for this suite. 
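The webhook registration in this test points a MutatingWebhookConfiguration at the e2e-test-webhook service deployed above. A minimal sketch follows; the service name and namespace are from the log, while the webhook name, path, and rule details are assumptions:

```yaml
# Sketch (partly assumed) of a mutating webhook registration of the
# kind this test creates: configmap CREATEs are routed through the
# in-cluster e2e-test-webhook service for mutation.
apiVersion: admissionregistration.k8s.io/v1
kind: MutatingWebhookConfiguration
metadata:
  name: mutate-configmap-example      # illustrative name
webhooks:
- name: mutate-configmap.example.com  # illustrative name
  admissionReviewVersions: ["v1"]
  sideEffects: None
  clientConfig:
    service:
      name: e2e-test-webhook
      namespace: webhook-4874
      path: /mutating-configmaps      # assumed path
    caBundle: "<base64-encoded-CA>"   # placeholder
  rules:
  - operations: ["CREATE"]
    apiGroups: [""]
    apiVersions: ["v1"]
    resources: ["configmaps"]
```

With this in place, the "create a configmap that should be updated by the webhook" STEP succeeds when the stored object comes back with the webhook's mutation applied.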
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:6.615 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate configmap [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate configmap [Conformance]","total":278,"completed":209,"skipped":3381,"failed":0} [sig-apps] ReplicationController should surface a failure condition on a common issue like exceeded quota [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 10 22:08:45.918: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should surface a failure condition on a common issue like exceeded quota [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 10 22:08:45.994: INFO: Creating quota "condition-test" that allows only two pods to run in the current namespace STEP: Creating rc "condition-test" that asks for more than the allowed pod quota STEP: Checking rc "condition-test" has the desired failure condition set STEP: Scaling down rc "condition-test" to satisfy pod quota May 10 22:08:47.121: INFO: Updating replication controller "condition-test" STEP: Checking rc "condition-test" has no failure condition set [AfterEach] [sig-apps] 
ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 10 22:08:48.412: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-1010" for this suite. •{"msg":"PASSED [sig-apps] ReplicationController should surface a failure condition on a common issue like exceeded quota [Conformance]","total":278,"completed":210,"skipped":3381,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 10 22:08:48.419: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin May 10 22:08:48.935: INFO: Waiting up to 5m0s for pod "downwardapi-volume-78a18a4a-ccbd-4390-95eb-d154dc97fe15" in namespace "projected-1310" to be "success or failure" May 10 22:08:49.124: INFO: Pod "downwardapi-volume-78a18a4a-ccbd-4390-95eb-d154dc97fe15": Phase="Pending", Reason="", readiness=false. Elapsed: 188.795779ms May 10 22:08:51.143: INFO: Pod "downwardapi-volume-78a18a4a-ccbd-4390-95eb-d154dc97fe15": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.207339352s May 10 22:08:53.146: INFO: Pod "downwardapi-volume-78a18a4a-ccbd-4390-95eb-d154dc97fe15": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.210420436s STEP: Saw pod success May 10 22:08:53.146: INFO: Pod "downwardapi-volume-78a18a4a-ccbd-4390-95eb-d154dc97fe15" satisfied condition "success or failure" May 10 22:08:53.148: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-78a18a4a-ccbd-4390-95eb-d154dc97fe15 container client-container: STEP: delete the pod May 10 22:08:53.179: INFO: Waiting for pod downwardapi-volume-78a18a4a-ccbd-4390-95eb-d154dc97fe15 to disappear May 10 22:08:53.208: INFO: Pod downwardapi-volume-78a18a4a-ccbd-4390-95eb-d154dc97fe15 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 10 22:08:53.208: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1310" for this suite. 
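The projected downward API test above mounts a volume file that exposes the container's CPU limit. A sketch of the kind of pod spec it exercises, using a `projected` volume with a `downwardAPI` source (names, image, and the limit value are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-cpu-limit           # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox                      # assumed image
    command: ["cat", "/etc/podinfo/cpu_limit"]
    resources:
      limits:
        cpu: 1250m
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: cpu_limit
            resourceFieldRef:
              containerName: client-container
              resource: limits.cpu
              divisor: 1m               # file contains the limit in millicores
```

The test then reads the container log to confirm the file content matches the declared limit.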
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance]","total":278,"completed":211,"skipped":3406,"failed":0} ------------------------------ [k8s.io] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 10 22:08:53.215: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [It] should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 10 22:08:57.346: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-2161" for this suite. 
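The Kubelet hostAliases test above verifies that pod-level `hostAliases` entries are written into the container's `/etc/hosts`. A sketch of such a pod (the IP and hostnames are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: busybox-host-aliases            # illustrative name
spec:
  restartPolicy: Never
  hostAliases:
  - ip: "123.45.67.89"
    hostnames:
    - "foo.local"
    - "bar.local"
  containers:
  - name: busybox-host-aliases
    image: busybox
    command: ["cat", "/etc/hosts"]
```

The kubelet manages `/etc/hosts` for pods with a pod network, appending one line per alias (`123.45.67.89  foo.local  bar.local`), which the test checks in the container output.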
•{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":212,"skipped":3406,"failed":0} SS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 10 22:08:57.352: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting up data [It] should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating pod pod-subpath-test-configmap-5c5q STEP: Creating a pod to test atomic-volume-subpath May 10 22:08:57.444: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-5c5q" in namespace "subpath-4148" to be "success or failure" May 10 22:08:57.455: INFO: Pod "pod-subpath-test-configmap-5c5q": Phase="Pending", Reason="", readiness=false. Elapsed: 10.685774ms May 10 22:08:59.459: INFO: Pod "pod-subpath-test-configmap-5c5q": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015223176s May 10 22:09:01.464: INFO: Pod "pod-subpath-test-configmap-5c5q": Phase="Running", Reason="", readiness=true. Elapsed: 4.019850192s May 10 22:09:03.468: INFO: Pod "pod-subpath-test-configmap-5c5q": Phase="Running", Reason="", readiness=true. 
Elapsed: 6.023658533s May 10 22:09:05.472: INFO: Pod "pod-subpath-test-configmap-5c5q": Phase="Running", Reason="", readiness=true. Elapsed: 8.028443453s May 10 22:09:07.476: INFO: Pod "pod-subpath-test-configmap-5c5q": Phase="Running", Reason="", readiness=true. Elapsed: 10.03228539s May 10 22:09:09.481: INFO: Pod "pod-subpath-test-configmap-5c5q": Phase="Running", Reason="", readiness=true. Elapsed: 12.036737059s May 10 22:09:11.485: INFO: Pod "pod-subpath-test-configmap-5c5q": Phase="Running", Reason="", readiness=true. Elapsed: 14.041060193s May 10 22:09:13.489: INFO: Pod "pod-subpath-test-configmap-5c5q": Phase="Running", Reason="", readiness=true. Elapsed: 16.045275043s May 10 22:09:15.493: INFO: Pod "pod-subpath-test-configmap-5c5q": Phase="Running", Reason="", readiness=true. Elapsed: 18.049459586s May 10 22:09:17.497: INFO: Pod "pod-subpath-test-configmap-5c5q": Phase="Running", Reason="", readiness=true. Elapsed: 20.0532023s May 10 22:09:19.501: INFO: Pod "pod-subpath-test-configmap-5c5q": Phase="Running", Reason="", readiness=true. Elapsed: 22.057463553s May 10 22:09:21.508: INFO: Pod "pod-subpath-test-configmap-5c5q": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 24.064049609s STEP: Saw pod success May 10 22:09:21.508: INFO: Pod "pod-subpath-test-configmap-5c5q" satisfied condition "success or failure" May 10 22:09:21.510: INFO: Trying to get logs from node jerma-worker pod pod-subpath-test-configmap-5c5q container test-container-subpath-configmap-5c5q: STEP: delete the pod May 10 22:09:21.653: INFO: Waiting for pod pod-subpath-test-configmap-5c5q to disappear May 10 22:09:21.684: INFO: Pod pod-subpath-test-configmap-5c5q no longer exists STEP: Deleting pod pod-subpath-test-configmap-5c5q May 10 22:09:21.684: INFO: Deleting pod "pod-subpath-test-configmap-5c5q" in namespace "subpath-4148" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 10 22:09:21.687: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-4148" for this suite. • [SLOW TEST:24.342 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]","total":278,"completed":213,"skipped":3408,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD without validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI 
[Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 10 22:09:21.696: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD without validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 10 22:09:21.802: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with any unknown properties May 10 22:09:24.728: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-3189 create -f -' May 10 22:09:27.949: INFO: stderr: "" May 10 22:09:27.949: INFO: stdout: "e2e-test-crd-publish-openapi-4056-crd.crd-publish-openapi-test-empty.example.com/test-cr created\n" May 10 22:09:27.949: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-3189 delete e2e-test-crd-publish-openapi-4056-crds test-cr' May 10 22:09:28.067: INFO: stderr: "" May 10 22:09:28.067: INFO: stdout: "e2e-test-crd-publish-openapi-4056-crd.crd-publish-openapi-test-empty.example.com \"test-cr\" deleted\n" May 10 22:09:28.067: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-3189 apply -f -' May 10 22:09:28.357: INFO: stderr: "" May 10 22:09:28.357: INFO: stdout: "e2e-test-crd-publish-openapi-4056-crd.crd-publish-openapi-test-empty.example.com/test-cr created\n" May 10 22:09:28.357: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-3189 delete e2e-test-crd-publish-openapi-4056-crds test-cr' May 10 22:09:28.461: INFO: stderr: "" May 10 22:09:28.461: INFO: stdout: 
"e2e-test-crd-publish-openapi-4056-crd.crd-publish-openapi-test-empty.example.com \"test-cr\" deleted\n" STEP: kubectl explain works to explain CR without validation schema May 10 22:09:28.461: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-4056-crds' May 10 22:09:28.686: INFO: stderr: "" May 10 22:09:28.686: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-4056-crd\nVERSION: crd-publish-openapi-test-empty.example.com/v1\n\nDESCRIPTION:\n \n" [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 10 22:09:31.570: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-3189" for this suite. • [SLOW TEST:9.883 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD without validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD without validation schema [Conformance]","total":278,"completed":214,"skipped":3432,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 10 22:09:31.579: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: 
Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin May 10 22:09:31.651: INFO: Waiting up to 5m0s for pod "downwardapi-volume-ad8812dd-8e1a-4072-b046-15adf5570718" in namespace "downward-api-6184" to be "success or failure" May 10 22:09:31.655: INFO: Pod "downwardapi-volume-ad8812dd-8e1a-4072-b046-15adf5570718": Phase="Pending", Reason="", readiness=false. Elapsed: 3.404524ms May 10 22:09:33.659: INFO: Pod "downwardapi-volume-ad8812dd-8e1a-4072-b046-15adf5570718": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007748911s May 10 22:09:35.714: INFO: Pod "downwardapi-volume-ad8812dd-8e1a-4072-b046-15adf5570718": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.062167736s STEP: Saw pod success May 10 22:09:35.714: INFO: Pod "downwardapi-volume-ad8812dd-8e1a-4072-b046-15adf5570718" satisfied condition "success or failure" May 10 22:09:35.716: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-ad8812dd-8e1a-4072-b046-15adf5570718 container client-container: STEP: delete the pod May 10 22:09:35.732: INFO: Waiting for pod downwardapi-volume-ad8812dd-8e1a-4072-b046-15adf5570718 to disappear May 10 22:09:35.736: INFO: Pod downwardapi-volume-ad8812dd-8e1a-4072-b046-15adf5570718 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 10 22:09:35.736: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-6184" for this suite. 
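This test covers the same CPU-limit lookup as the projected variant earlier in the run, but through the plain `downwardAPI` volume type rather than a `projected` source. A sketch of the difference (image and values are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-cpu          # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox                      # assumed image
    command: ["cat", "/etc/podinfo/cpu_limit"]
    resources:
      limits:
        cpu: 500m
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:                        # direct volume type, no projected wrapper
      items:
      - path: cpu_limit
        resourceFieldRef:
          containerName: client-container
          resource: limits.cpu
          divisor: 1m
```

Both forms expose the same `resourceFieldRef` data; `projected` additionally allows mixing downward API items with secrets and configMaps in one volume.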
•{"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance]","total":278,"completed":215,"skipped":3455,"failed":0} SSSSS ------------------------------ [k8s.io] Security Context When creating a container with runAsUser should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 10 22:09:35.743: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:39 [It] should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 10 22:09:35.838: INFO: Waiting up to 5m0s for pod "busybox-user-65534-d1fc4316-3b1d-471e-bfb0-20115e97f42d" in namespace "security-context-test-1529" to be "success or failure" May 10 22:09:35.850: INFO: Pod "busybox-user-65534-d1fc4316-3b1d-471e-bfb0-20115e97f42d": Phase="Pending", Reason="", readiness=false. Elapsed: 12.466856ms May 10 22:09:37.904: INFO: Pod "busybox-user-65534-d1fc4316-3b1d-471e-bfb0-20115e97f42d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.065648125s May 10 22:09:39.908: INFO: Pod "busybox-user-65534-d1fc4316-3b1d-471e-bfb0-20115e97f42d": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.069899837s May 10 22:09:39.908: INFO: Pod "busybox-user-65534-d1fc4316-3b1d-471e-bfb0-20115e97f42d" satisfied condition "success or failure" [AfterEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 10 22:09:39.908: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-1529" for this suite. •{"msg":"PASSED [k8s.io] Security Context When creating a container with runAsUser should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":216,"skipped":3460,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 10 22:09:39.916: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] binary data should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name configmap-test-upd-46bc1476-d8f1-493b-982c-c610e4d651ee STEP: Creating the pod STEP: Waiting for pod with text data STEP: Waiting for pod with binary data [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 10 22:09:44.103: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-2910" for this suite. 
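The ConfigMap binary-data test above mounts a ConfigMap containing both `data` and `binaryData` keys and checks that each appears as a file in the volume. A sketch of such a ConfigMap (names and values are illustrative; `binaryData` values must be base64-encoded):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: configmap-binary-demo           # illustrative name
data:
  text-data: "hello"
binaryData:
  binary-file: aGVsbG8gd29ybGQ=         # base64 for "hello world"
```

When mounted as a volume, `text-data` and `binary-file` both become files; the binary key is written with its decoded bytes, which is what the "Waiting for pod with binary data" step asserts.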
•{"msg":"PASSED [sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":217,"skipped":3482,"failed":0} SSSSSSSSS ------------------------------ [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 10 22:09:44.109: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide pod UID as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward api env vars May 10 22:09:44.173: INFO: Waiting up to 5m0s for pod "downward-api-a40326fa-5a8a-474a-8de5-bbc6595bbf67" in namespace "downward-api-383" to be "success or failure" May 10 22:09:44.176: INFO: Pod "downward-api-a40326fa-5a8a-474a-8de5-bbc6595bbf67": Phase="Pending", Reason="", readiness=false. Elapsed: 2.617579ms May 10 22:09:46.180: INFO: Pod "downward-api-a40326fa-5a8a-474a-8de5-bbc6595bbf67": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006913212s May 10 22:09:48.184: INFO: Pod "downward-api-a40326fa-5a8a-474a-8de5-bbc6595bbf67": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.010604672s STEP: Saw pod success May 10 22:09:48.184: INFO: Pod "downward-api-a40326fa-5a8a-474a-8de5-bbc6595bbf67" satisfied condition "success or failure" May 10 22:09:48.186: INFO: Trying to get logs from node jerma-worker pod downward-api-a40326fa-5a8a-474a-8de5-bbc6595bbf67 container dapi-container: STEP: delete the pod May 10 22:09:48.270: INFO: Waiting for pod downward-api-a40326fa-5a8a-474a-8de5-bbc6595bbf67 to disappear May 10 22:09:48.284: INFO: Pod downward-api-a40326fa-5a8a-474a-8de5-bbc6595bbf67 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 10 22:09:48.284: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-383" for this suite. •{"msg":"PASSED [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance]","total":278,"completed":218,"skipped":3491,"failed":0} SS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 10 22:09:48.295: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:64 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:79 STEP: Creating service test in namespace 
statefulset-5256 [It] should perform canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a new StatefulSet May 10 22:09:48.388: INFO: Found 0 stateful pods, waiting for 3 May 10 22:09:58.392: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true May 10 22:09:58.392: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true May 10 22:09:58.392: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=false May 10 22:10:08.492: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true May 10 22:10:08.492: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true May 10 22:10:08.492: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Updating stateful set template: update image from docker.io/library/httpd:2.4.38-alpine to docker.io/library/httpd:2.4.39-alpine May 10 22:10:08.527: INFO: Updating stateful set ss2 STEP: Creating a new revision STEP: Not applying an update when the partition is greater than the number of replicas STEP: Performing a canary update May 10 22:10:18.790: INFO: Updating stateful set ss2 May 10 22:10:18.799: INFO: Waiting for Pod statefulset-5256/ss2-2 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 STEP: Restoring Pods to the correct revision when they are deleted May 10 22:10:28.955: INFO: Found 2 stateful pods, waiting for 3 May 10 22:10:38.959: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true May 10 22:10:38.959: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true May 10 22:10:38.959: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Performing a phased rolling 
update May 10 22:10:38.982: INFO: Updating stateful set ss2 May 10 22:10:38.998: INFO: Waiting for Pod statefulset-5256/ss2-1 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 May 10 22:10:49.022: INFO: Waiting for Pod statefulset-5256/ss2-1 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 May 10 22:10:59.023: INFO: Updating stateful set ss2 May 10 22:10:59.067: INFO: Waiting for StatefulSet statefulset-5256/ss2 to complete update May 10 22:10:59.068: INFO: Waiting for Pod statefulset-5256/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90 May 10 22:11:09.101: INFO: Deleting all statefulset in ns statefulset-5256 May 10 22:11:09.104: INFO: Scaling statefulset ss2 to 0 May 10 22:11:29.163: INFO: Waiting for statefulset status.replicas updated to 0 May 10 22:11:29.165: INFO: Deleting statefulset ss2 [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 10 22:11:29.186: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-5256" for this suite. 
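The canary and phased rollout behavior exercised above is driven by the StatefulSet `rollingUpdate.partition` field: only pods with an ordinal greater than or equal to the partition receive the new revision. A sketch consistent with the log (names, images, and the service name are taken from the log; labels are illustrative):

```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: ss2                             # name from the log
spec:
  replicas: 3
  serviceName: test                     # service created by the test
  selector:
    matchLabels:
      app: ss2                          # illustrative label
  updateStrategy:
    type: RollingUpdate
    rollingUpdate:
      partition: 2     # only ss2-2 is updated: the canary step in the log
  template:
    metadata:
      labels:
        app: ss2
    spec:
      containers:
      - name: webserver                 # illustrative container name
        image: docker.io/library/httpd:2.4.39-alpine   # target image from the log
```

Setting the partition above the replica count freezes all pods on the old revision (the "Not applying an update" step); the phased rollout then lowers the partition stepwise (2, 1, 0) so pods update in reverse ordinal order.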
• [SLOW TEST:100.897 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should perform canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance]","total":278,"completed":219,"skipped":3493,"failed":0} SS ------------------------------ [sig-cli] Kubectl client Proxy server should support --unix-socket=/path [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 10 22:11:29.192: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [It] should support --unix-socket=/path [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Starting the proxy May 10 22:11:29.291: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy --unix-socket=/tmp/kubectl-proxy-unix011464388/test' STEP: retrieving proxy /api/ output [AfterEach] [sig-cli] Kubectl client 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 10 22:11:29.356: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-5879" for this suite. •{"msg":"PASSED [sig-cli] Kubectl client Proxy server should support --unix-socket=/path [Conformance]","total":278,"completed":220,"skipped":3495,"failed":0} SSSSSSSSSSS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 10 22:11:29.363: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default command and arguments [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test override all May 10 22:11:29.504: INFO: Waiting up to 5m0s for pod "client-containers-68fc65e2-64dd-4b9a-ba5c-abd5d6eff3b9" in namespace "containers-5572" to be "success or failure" May 10 22:11:29.547: INFO: Pod "client-containers-68fc65e2-64dd-4b9a-ba5c-abd5d6eff3b9": Phase="Pending", Reason="", readiness=false. Elapsed: 43.396785ms May 10 22:11:31.965: INFO: Pod "client-containers-68fc65e2-64dd-4b9a-ba5c-abd5d6eff3b9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.460962259s May 10 22:11:33.968: INFO: Pod "client-containers-68fc65e2-64dd-4b9a-ba5c-abd5d6eff3b9": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.464436305s STEP: Saw pod success May 10 22:11:33.968: INFO: Pod "client-containers-68fc65e2-64dd-4b9a-ba5c-abd5d6eff3b9" satisfied condition "success or failure" May 10 22:11:33.971: INFO: Trying to get logs from node jerma-worker pod client-containers-68fc65e2-64dd-4b9a-ba5c-abd5d6eff3b9 container test-container: STEP: delete the pod May 10 22:11:34.239: INFO: Waiting for pod client-containers-68fc65e2-64dd-4b9a-ba5c-abd5d6eff3b9 to disappear May 10 22:11:34.262: INFO: Pod client-containers-68fc65e2-64dd-4b9a-ba5c-abd5d6eff3b9 no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 10 22:11:34.262: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-5572" for this suite. •{"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance]","total":278,"completed":221,"skipped":3506,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 10 22:11:34.273: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward api env vars May 10 22:11:34.567: INFO: Waiting up to 5m0s for pod 
"downward-api-6286d021-9615-4fb8-8441-f4ee56081f15" in namespace "downward-api-1765" to be "success or failure" May 10 22:11:34.675: INFO: Pod "downward-api-6286d021-9615-4fb8-8441-f4ee56081f15": Phase="Pending", Reason="", readiness=false. Elapsed: 107.496911ms May 10 22:11:36.977: INFO: Pod "downward-api-6286d021-9615-4fb8-8441-f4ee56081f15": Phase="Pending", Reason="", readiness=false. Elapsed: 2.410095297s May 10 22:11:38.981: INFO: Pod "downward-api-6286d021-9615-4fb8-8441-f4ee56081f15": Phase="Pending", Reason="", readiness=false. Elapsed: 4.414088707s May 10 22:11:40.985: INFO: Pod "downward-api-6286d021-9615-4fb8-8441-f4ee56081f15": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.418006667s STEP: Saw pod success May 10 22:11:40.985: INFO: Pod "downward-api-6286d021-9615-4fb8-8441-f4ee56081f15" satisfied condition "success or failure" May 10 22:11:40.988: INFO: Trying to get logs from node jerma-worker pod downward-api-6286d021-9615-4fb8-8441-f4ee56081f15 container dapi-container: STEP: delete the pod May 10 22:11:41.048: INFO: Waiting for pod downward-api-6286d021-9615-4fb8-8441-f4ee56081f15 to disappear May 10 22:11:41.157: INFO: Pod downward-api-6286d021-9615-4fb8-8441-f4ee56081f15 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 10 22:11:41.157: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-1765" for this suite. 
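The downward API env-var test above injects the pod's name, namespace, and IP into the container environment via `fieldRef`. A sketch of the pattern (the pod name, variable names, and image are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: dapi-env-demo                   # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox                      # assumed image
    command: ["sh", "-c", "env"]
    env:
    - name: POD_NAME
      valueFrom:
        fieldRef:
          fieldPath: metadata.name
    - name: POD_NAMESPACE
      valueFrom:
        fieldRef:
          fieldPath: metadata.namespace
    - name: POD_IP
      valueFrom:
        fieldRef:
          fieldPath: status.podIP
```

`status.podIP` is resolved at container start, so the pod passes through Pending until an IP is assigned, matching the extra Pending polls seen in the log before success.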
• [SLOW TEST:6.893 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:33 should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]","total":278,"completed":222,"skipped":3519,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 10 22:11:41.166: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name cm-test-opt-del-90c273dd-2860-43bd-a1b3-2e9b56d56a0b STEP: Creating configMap with name cm-test-opt-upd-6582479b-cd90-4ca2-81d1-6df644b1b288 STEP: Creating the pod STEP: Deleting configmap cm-test-opt-del-90c273dd-2860-43bd-a1b3-2e9b56d56a0b STEP: Updating configmap cm-test-opt-upd-6582479b-cd90-4ca2-81d1-6df644b1b288 STEP: Creating configMap with name cm-test-opt-create-1e0c28d3-4af0-4f28-a324-e640a2c3f8c3 STEP: waiting to observe update in volume [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 10 
22:13:00.441: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-8602" for this suite. • [SLOW TEST:79.694 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":223,"skipped":3545,"failed":0} SSSSSSSSSSS ------------------------------ [sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 10 22:13:00.860: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139 [It] should be able to change the type from ExternalName to ClusterIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating a service externalname-service with the type=ExternalName in namespace services-5486 STEP: changing the ExternalName service to type=ClusterIP STEP: creating replication controller externalname-service in namespace services-5486 I0510 22:13:02.386846 6 runners.go:189] Created replication controller with name: externalname-service, namespace: services-5486, replica count: 2 I0510 22:13:05.437492 
6 runners.go:189] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0510 22:13:08.437745 6 runners.go:189] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0510 22:13:11.437957 6 runners.go:189] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0510 22:13:14.438203 6 runners.go:189] externalname-service Pods: 2 out of 2 created, 1 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0510 22:13:17.438403 6 runners.go:189] externalname-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady May 10 22:13:17.438: INFO: Creating new exec pod May 10 22:13:24.804: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-5486 execpodfshlg -- /bin/sh -x -c nc -zv -t -w 2 externalname-service 80' May 10 22:13:25.018: INFO: stderr: "I0510 22:13:24.935275 3178 log.go:172] (0xc0000c3290) (0xc000695b80) Create stream\nI0510 22:13:24.935333 3178 log.go:172] (0xc0000c3290) (0xc000695b80) Stream added, broadcasting: 1\nI0510 22:13:24.937002 3178 log.go:172] (0xc0000c3290) Reply frame received for 1\nI0510 22:13:24.937060 3178 log.go:172] (0xc0000c3290) (0xc000695d60) Create stream\nI0510 22:13:24.937080 3178 log.go:172] (0xc0000c3290) (0xc000695d60) Stream added, broadcasting: 3\nI0510 22:13:24.938321 3178 log.go:172] (0xc0000c3290) Reply frame received for 3\nI0510 22:13:24.938349 3178 log.go:172] (0xc0000c3290) (0xc0009a60a0) Create stream\nI0510 22:13:24.938357 3178 log.go:172] (0xc0000c3290) (0xc0009a60a0) Stream added, broadcasting: 5\nI0510 22:13:24.939113 3178 log.go:172] (0xc0000c3290) Reply frame received for 5\nI0510 22:13:25.008089 3178 log.go:172] 
(0xc0000c3290) Data frame received for 5\nI0510 22:13:25.008212 3178 log.go:172] (0xc0009a60a0) (5) Data frame handling\nI0510 22:13:25.008289 3178 log.go:172] (0xc0009a60a0) (5) Data frame sent\n+ nc -zv -t -w 2 externalname-service 80\nI0510 22:13:25.008759 3178 log.go:172] (0xc0000c3290) Data frame received for 5\nI0510 22:13:25.008790 3178 log.go:172] (0xc0009a60a0) (5) Data frame handling\nI0510 22:13:25.008803 3178 log.go:172] (0xc0009a60a0) (5) Data frame sent\nConnection to externalname-service 80 port [tcp/http] succeeded!\nI0510 22:13:25.008961 3178 log.go:172] (0xc0000c3290) Data frame received for 3\nI0510 22:13:25.008972 3178 log.go:172] (0xc000695d60) (3) Data frame handling\nI0510 22:13:25.009679 3178 log.go:172] (0xc0000c3290) Data frame received for 5\nI0510 22:13:25.009700 3178 log.go:172] (0xc0009a60a0) (5) Data frame handling\nI0510 22:13:25.011302 3178 log.go:172] (0xc0000c3290) Data frame received for 1\nI0510 22:13:25.011317 3178 log.go:172] (0xc000695b80) (1) Data frame handling\nI0510 22:13:25.011328 3178 log.go:172] (0xc000695b80) (1) Data frame sent\nI0510 22:13:25.011358 3178 log.go:172] (0xc0000c3290) (0xc000695b80) Stream removed, broadcasting: 1\nI0510 22:13:25.011724 3178 log.go:172] (0xc0000c3290) (0xc000695b80) Stream removed, broadcasting: 1\nI0510 22:13:25.011741 3178 log.go:172] (0xc0000c3290) (0xc000695d60) Stream removed, broadcasting: 3\nI0510 22:13:25.011754 3178 log.go:172] (0xc0000c3290) (0xc0009a60a0) Stream removed, broadcasting: 5\n" May 10 22:13:25.018: INFO: stdout: "" May 10 22:13:25.018: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-5486 execpodfshlg -- /bin/sh -x -c nc -zv -t -w 2 10.98.19.242 80' May 10 22:13:25.232: INFO: stderr: "I0510 22:13:25.143713 3198 log.go:172] (0xc000968c60) (0xc000a08280) Create stream\nI0510 22:13:25.143760 3198 log.go:172] (0xc000968c60) (0xc000a08280) Stream added, broadcasting: 1\nI0510 22:13:25.148345 3198 log.go:172] (0xc000968c60) 
Reply frame received for 1\nI0510 22:13:25.148385 3198 log.go:172] (0xc000968c60) (0xc00058a6e0) Create stream\nI0510 22:13:25.148398 3198 log.go:172] (0xc000968c60) (0xc00058a6e0) Stream added, broadcasting: 3\nI0510 22:13:25.149556 3198 log.go:172] (0xc000968c60) Reply frame received for 3\nI0510 22:13:25.149592 3198 log.go:172] (0xc000968c60) (0xc0007c34a0) Create stream\nI0510 22:13:25.149604 3198 log.go:172] (0xc000968c60) (0xc0007c34a0) Stream added, broadcasting: 5\nI0510 22:13:25.150663 3198 log.go:172] (0xc000968c60) Reply frame received for 5\nI0510 22:13:25.224718 3198 log.go:172] (0xc000968c60) Data frame received for 5\nI0510 22:13:25.224742 3198 log.go:172] (0xc0007c34a0) (5) Data frame handling\nI0510 22:13:25.224752 3198 log.go:172] (0xc0007c34a0) (5) Data frame sent\nI0510 22:13:25.224759 3198 log.go:172] (0xc000968c60) Data frame received for 5\nI0510 22:13:25.224764 3198 log.go:172] (0xc0007c34a0) (5) Data frame handling\n+ nc -zv -t -w 2 10.98.19.242 80\nConnection to 10.98.19.242 80 port [tcp/http] succeeded!\nI0510 22:13:25.224784 3198 log.go:172] (0xc000968c60) Data frame received for 3\nI0510 22:13:25.224790 3198 log.go:172] (0xc00058a6e0) (3) Data frame handling\nI0510 22:13:25.226441 3198 log.go:172] (0xc000968c60) Data frame received for 1\nI0510 22:13:25.226464 3198 log.go:172] (0xc000a08280) (1) Data frame handling\nI0510 22:13:25.226486 3198 log.go:172] (0xc000a08280) (1) Data frame sent\nI0510 22:13:25.226504 3198 log.go:172] (0xc000968c60) (0xc000a08280) Stream removed, broadcasting: 1\nI0510 22:13:25.226523 3198 log.go:172] (0xc000968c60) Go away received\nI0510 22:13:25.226930 3198 log.go:172] (0xc000968c60) (0xc000a08280) Stream removed, broadcasting: 1\nI0510 22:13:25.226958 3198 log.go:172] (0xc000968c60) (0xc00058a6e0) Stream removed, broadcasting: 3\nI0510 22:13:25.226969 3198 log.go:172] (0xc000968c60) (0xc0007c34a0) Stream removed, broadcasting: 5\n" May 10 22:13:25.232: INFO: stdout: "" May 10 22:13:25.232: INFO: Cleaning 
up the ExternalName to ClusterIP test service [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 10 22:13:25.601: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-5486" for this suite. [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143 • [SLOW TEST:25.047 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from ExternalName to ClusterIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","total":278,"completed":224,"skipped":3556,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 10 22:13:25.907: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap configmap-256/configmap-test-a1d87b26-7d05-4c21-a7c0-b0cf52a68d88 STEP: Creating a pod to test consume configMaps May 10 22:13:26.733: INFO: Waiting up to 5m0s for pod 
"pod-configmaps-9b0b2849-d4a2-4c2f-a3f8-1ce048070095" in namespace "configmap-256" to be "success or failure" May 10 22:13:26.803: INFO: Pod "pod-configmaps-9b0b2849-d4a2-4c2f-a3f8-1ce048070095": Phase="Pending", Reason="", readiness=false. Elapsed: 70.296956ms May 10 22:13:28.808: INFO: Pod "pod-configmaps-9b0b2849-d4a2-4c2f-a3f8-1ce048070095": Phase="Pending", Reason="", readiness=false. Elapsed: 2.074443692s May 10 22:13:31.115: INFO: Pod "pod-configmaps-9b0b2849-d4a2-4c2f-a3f8-1ce048070095": Phase="Pending", Reason="", readiness=false. Elapsed: 4.382173762s May 10 22:13:33.584: INFO: Pod "pod-configmaps-9b0b2849-d4a2-4c2f-a3f8-1ce048070095": Phase="Running", Reason="", readiness=true. Elapsed: 6.850541625s May 10 22:13:35.653: INFO: Pod "pod-configmaps-9b0b2849-d4a2-4c2f-a3f8-1ce048070095": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.920200072s STEP: Saw pod success May 10 22:13:35.653: INFO: Pod "pod-configmaps-9b0b2849-d4a2-4c2f-a3f8-1ce048070095" satisfied condition "success or failure" May 10 22:13:35.715: INFO: Trying to get logs from node jerma-worker2 pod pod-configmaps-9b0b2849-d4a2-4c2f-a3f8-1ce048070095 container env-test: STEP: delete the pod May 10 22:13:36.429: INFO: Waiting for pod pod-configmaps-9b0b2849-d4a2-4c2f-a3f8-1ce048070095 to disappear May 10 22:13:36.696: INFO: Pod pod-configmaps-9b0b2849-d4a2-4c2f-a3f8-1ce048070095 no longer exists [AfterEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 10 22:13:36.696: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-256" for this suite. 
• [SLOW TEST:10.846 seconds]
[sig-node] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance]","total":278,"completed":225,"skipped":3579,"failed":0}
[k8s.io] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
May 10 22:13:36.753: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a container's args [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test substitution in container's args
May 10 22:13:38.109: INFO: Waiting up to 5m0s for pod "var-expansion-2d00af47-5d33-4c1b-81bd-94d9b43058a9" in namespace "var-expansion-8983" to be "success or failure"
May 10 22:13:38.246: INFO: Pod "var-expansion-2d00af47-5d33-4c1b-81bd-94d9b43058a9": Phase="Pending", Reason="", readiness=false. Elapsed: 136.926003ms
May 10 22:13:40.250: INFO: Pod "var-expansion-2d00af47-5d33-4c1b-81bd-94d9b43058a9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.140314647s
May 10 22:13:42.366: INFO: Pod "var-expansion-2d00af47-5d33-4c1b-81bd-94d9b43058a9": Phase="Pending", Reason="", readiness=false. Elapsed: 4.256174286s
May 10 22:13:44.370: INFO: Pod "var-expansion-2d00af47-5d33-4c1b-81bd-94d9b43058a9": Phase="Running", Reason="", readiness=true. Elapsed: 6.260105234s
May 10 22:13:46.374: INFO: Pod "var-expansion-2d00af47-5d33-4c1b-81bd-94d9b43058a9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.264224298s
STEP: Saw pod success
May 10 22:13:46.374: INFO: Pod "var-expansion-2d00af47-5d33-4c1b-81bd-94d9b43058a9" satisfied condition "success or failure"
May 10 22:13:46.377: INFO: Trying to get logs from node jerma-worker2 pod var-expansion-2d00af47-5d33-4c1b-81bd-94d9b43058a9 container dapi-container:
STEP: delete the pod
May 10 22:13:46.483: INFO: Waiting for pod var-expansion-2d00af47-5d33-4c1b-81bd-94d9b43058a9 to disappear
May 10 22:13:46.577: INFO: Pod var-expansion-2d00af47-5d33-4c1b-81bd-94d9b43058a9 no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
May 10 22:13:46.577: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-8983" for this suite.
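The Variable Expansion test above creates a pod whose container args contain $(VAR) references and verifies that the kubelet substitutes them before running the container. A simplified model of that substitution (the real rules, including $$ escaping, live in Kubernetes' forked expansion package; this sketch only covers plain $(VAR) references):

```python
import re

def expand_args(arg, env):
    """Expand $(VAR) references in a container command/arg string.

    A defined variable is replaced by its value; an undefined reference is
    left verbatim, matching Kubernetes' behavior for unresolvable variables.
    """
    def replace(match):
        return env.get(match.group(1), match.group(0))
    return re.sub(r"\$\(([A-Za-z_][A-Za-z0-9_]*)\)", replace, arg)

expanded = expand_args("$(POD_NAME); kept: $(UNDEFINED)", {"POD_NAME": "test-value"})
```

The e2e test then reads the container's output (via the "Trying to get logs" step above) and checks that the expanded values appear.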
• [SLOW TEST:9.833 seconds]
[k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should allow substituting values in a container's args [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance]","total":278,"completed":226,"skipped":3579,"failed":0}
SSSSSS
------------------------------
[k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class should be set on Pods with matching resource requests and limits for memory and cpu [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] [sig-node] Pods Extended
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
May 10 22:13:46.586: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods Set QOS Class
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:178
[It] should be set on Pods with matching resource requests and limits for memory and cpu [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying QOS class is set on the pod
[AfterEach] [k8s.io] [sig-node] Pods Extended
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
May 10 22:13:46.738: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-8379" for this suite.
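The QoS test above creates a pod whose cpu and memory requests match its limits and verifies the API server sets status.qosClass accordingly. The classification rules can be sketched as follows (a simplified reading of the upstream qos helper, not its exact code; note that the init-container pod dumped later in this log carries only a cpu request/limit and is accordingly reported as Burstable):

```python
def qos_class(containers):
    """Classify a pod from per-container {"requests": ..., "limits": ...} dicts.

    Guaranteed: every container has cpu+memory limits with requests equal to
    limits. BestEffort: no container sets any requests or limits. Otherwise:
    Burstable.
    """
    any_set = False
    guaranteed = True
    for c in containers:
        requests = dict(c.get("requests") or {})
        limits = dict(c.get("limits") or {})
        if requests or limits:
            any_set = True
        for resource, quantity in limits.items():
            requests.setdefault(resource, quantity)  # requests default to limits
        if "cpu" not in limits or "memory" not in limits or requests != limits:
            guaranteed = False
    if not any_set:
        return "BestEffort"
    return "Guaranteed" if guaranteed else "Burstable"

# Matching requests and limits for memory and cpu, as in the test above.
matching = qos_class([{"requests": {"cpu": "100m", "memory": "128Mi"},
                       "limits": {"cpu": "100m", "memory": "128Mi"}}])
```

A container with only a cpu limit, like run1 in the init-container pod below, falls through to Burstable.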
•{"msg":"PASSED [k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class should be set on Pods with matching resource requests and limits for memory and cpu [Conformance]","total":278,"completed":227,"skipped":3585,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 10 22:13:46.757: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153 [It] should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the pod May 10 22:13:46.877: INFO: PodSpec: initContainers in spec.initContainers May 10 22:14:40.942: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-3cc349a9-2c74-455e-b415-69129ecd3f59", GenerateName:"", Namespace:"init-container-477", SelfLink:"/api/v1/namespaces/init-container-477/pods/pod-init-3cc349a9-2c74-455e-b415-69129ecd3f59", UID:"0671eb4f-e857-4647-8993-d254fefb5a51", ResourceVersion:"15078996", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63724745626, loc:(*time.Location)(0x78ee0c0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), 
Labels:map[string]string{"name":"foo", "time":"877427752"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-cvfrk", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc0066d1700), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}}, InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-cvfrk", 
ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-cvfrk", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"k8s.gcr.io/pause:3.1", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, 
s:"100m", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-cvfrk", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc005eb1cb8), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"jerma-worker2", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc00238b080), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc005eb1d40)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc005eb1d60)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc005eb1d68), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc005eb1d6c), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil)}, 
Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724745627, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724745627, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724745627, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724745626, loc:(*time.Location)(0x78ee0c0)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"172.17.0.8", PodIP:"10.244.2.247", PodIPs:[]v1.PodIP{v1.PodIP{IP:"10.244.2.247"}}, StartTime:(*v1.Time)(0xc002929360), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(0xc0029293a0), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc001e35180)}, Ready:false, RestartCount:3, Image:"docker.io/library/busybox:1.29", 
ImageID:"docker.io/library/busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796", ContainerID:"containerd://bdc5ff9203b793ddfc0b41029ddc1a2d87682a8eef74c8f975cb31b0cf2b3941", Started:(*bool)(nil)}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc0029293c0), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"docker.io/library/busybox:1.29", ImageID:"", ContainerID:"", Started:(*bool)(nil)}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc002929380), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.1", ImageID:"", ContainerID:"", Started:(*bool)(0xc005eb1def)}}, QOSClass:"Burstable", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)}} [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 10 22:14:40.943: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-477" for this suite. 
• [SLOW TEST:54.204 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance]","total":278,"completed":228,"skipped":3636,"failed":0} SSSSSSS ------------------------------ [sig-apps] Deployment deployment should support proportional scaling [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 10 22:14:40.962: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:69 [It] deployment should support proportional scaling [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 10 22:14:41.436: INFO: Creating deployment "webserver-deployment" May 10 22:14:41.440: INFO: Waiting for observed generation 1 May 10 22:14:43.457: INFO: Waiting for all required pods to come up May 10 22:14:43.461: INFO: Pod name httpd: Found 10 pods out of 10 STEP: ensuring each pod is running May 10 22:14:53.469: INFO: Waiting for deployment "webserver-deployment" to complete May 10 22:14:53.475: INFO: Updating deployment "webserver-deployment" with a non-existent image May 10 22:14:53.482: INFO: Updating deployment 
webserver-deployment May 10 22:14:53.482: INFO: Waiting for observed generation 2 May 10 22:14:55.533: INFO: Waiting for the first rollout's replicaset to have .status.availableReplicas = 8 May 10 22:14:55.590: INFO: Waiting for the first rollout's replicaset to have .spec.replicas = 8 May 10 22:14:55.593: INFO: Waiting for the first rollout's replicaset of deployment "webserver-deployment" to have desired number of replicas May 10 22:14:55.602: INFO: Verifying that the second rollout's replicaset has .status.availableReplicas = 0 May 10 22:14:55.602: INFO: Waiting for the second rollout's replicaset to have .spec.replicas = 5 May 10 22:14:55.604: INFO: Waiting for the second rollout's replicaset of deployment "webserver-deployment" to have desired number of replicas May 10 22:14:55.609: INFO: Verifying that deployment "webserver-deployment" has minimum required number of available replicas May 10 22:14:55.609: INFO: Scaling up the deployment "webserver-deployment" from 10 to 30 May 10 22:14:55.614: INFO: Updating deployment webserver-deployment May 10 22:14:55.614: INFO: Waiting for the replicasets of deployment "webserver-deployment" to have desired number of replicas May 10 22:14:56.083: INFO: Verifying that first rollout's replicaset has .spec.replicas = 20 May 10 22:14:56.237: INFO: Verifying that second rollout's replicaset has .spec.replicas = 13 [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:63 May 10 22:14:58.700: INFO: Deployment "webserver-deployment": &Deployment{ObjectMeta:{webserver-deployment deployment-1639 /apis/apps/v1/namespaces/deployment-1639/deployments/webserver-deployment 3565939c-9ca1-42d7-86f4-b98cf4f61d3f 15079288 3 2020-05-10 22:14:41 +0000 UTC map[name:httpd] map[deployment.kubernetes.io/revision:2] [] [] []},Spec:DeploymentSpec{Replicas:*30,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: 
httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd] map[] [] [] []} {[] [] [{httpd webserver:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc00254b738 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:2,MaxSurge:3,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:3,Replicas:33,UpdatedReplicas:13,AvailableReplicas:8,UnavailableReplicas:25,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:False,Reason:MinimumReplicasUnavailable,Message:Deployment does not have minimum availability.,LastUpdateTime:2020-05-10 22:14:56 +0000 UTC,LastTransitionTime:2020-05-10 22:14:56 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:ReplicaSetUpdated,Message:ReplicaSet "webserver-deployment-c7997dcc8" is progressing.,LastUpdateTime:2020-05-10 22:14:56 +0000 UTC,LastTransitionTime:2020-05-10 22:14:41 +0000 UTC,},},ReadyReplicas:8,CollisionCount:nil,},} May 10 22:14:59.109: INFO: New ReplicaSet "webserver-deployment-c7997dcc8" of Deployment "webserver-deployment": &ReplicaSet{ObjectMeta:{webserver-deployment-c7997dcc8 deployment-1639 /apis/apps/v1/namespaces/deployment-1639/replicasets/webserver-deployment-c7997dcc8 7a14f7bf-0340-4dbc-a7fd-334f2d6ce5cc 15079286 3 2020-05-10 22:14:53 +0000 UTC map[name:httpd 
pod-template-hash:c7997dcc8] map[deployment.kubernetes.io/desired-replicas:30 deployment.kubernetes.io/max-replicas:33 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment webserver-deployment 3565939c-9ca1-42d7-86f4-b98cf4f61d3f 0xc001d7d677 0xc001d7d678}] [] []},Spec:ReplicaSetSpec{Replicas:*13,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: c7997dcc8,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [] [] []} {[] [] [{httpd webserver:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc001d7d6e8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:13,FullyLabeledReplicas:13,ObservedGeneration:3,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} May 10 22:14:59.109: INFO: All old ReplicaSets of Deployment "webserver-deployment": May 10 22:14:59.109: INFO: &ReplicaSet{ObjectMeta:{webserver-deployment-595b5b9587 deployment-1639 /apis/apps/v1/namespaces/deployment-1639/replicasets/webserver-deployment-595b5b9587 2f2b1a7a-138a-4243-b4fe-2735b8751484 15079273 3 2020-05-10 22:14:41 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[deployment.kubernetes.io/desired-replicas:30 deployment.kubernetes.io/max-replicas:33 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment webserver-deployment 3565939c-9ca1-42d7-86f4-b98cf4f61d3f 0xc001d7d5b7 0xc001d7d5b8}] [] 
[]},Spec:ReplicaSetSpec{Replicas:*20,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: 595b5b9587,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc001d7d618 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:20,FullyLabeledReplicas:20,ObservedGeneration:3,ReadyReplicas:8,AvailableReplicas:8,Conditions:[]ReplicaSetCondition{},},} May 10 22:14:59.429: INFO: Pod "webserver-deployment-595b5b9587-24b6n" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-24b6n webserver-deployment-595b5b9587- deployment-1639 /api/v1/namespaces/deployment-1639/pods/webserver-deployment-595b5b9587-24b6n 4fd1fe30-7b3d-4aaa-91bd-ebfba0c4603b 15079154 0 2020-05-10 22:14:41 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 2f2b1a7a-138a-4243-b4fe-2735b8751484 0xc001d7db97 0xc001d7db98}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-2p9sl,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-2p9sl,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-2p9sl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subd
omain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-10 22:14:41 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-10 22:14:53 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-10 22:14:53 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-10 22:14:41 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.8,PodIP:10.244.2.251,StartTime:2020-05-10 22:14:41 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-05-10 22:14:52 +0000 
UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://2bfbead11b5cd642667e0e9ddb003d0b972fe4846e1135322abdc8deb1077dc2,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.251,},},EphemeralContainerStatuses:[]ContainerStatus{},},} May 10 22:14:59.429: INFO: Pod "webserver-deployment-595b5b9587-4mptp" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-4mptp webserver-deployment-595b5b9587- deployment-1639 /api/v1/namespaces/deployment-1639/pods/webserver-deployment-595b5b9587-4mptp 19ed2dd3-2a0b-402b-8c6e-4cf2b2c62b8a 15079157 0 2020-05-10 22:14:41 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 2f2b1a7a-138a-4243-b4fe-2735b8751484 0xc001d7dd17 0xc001d7dd18}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-2p9sl,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-2p9sl,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-2p9sl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subd
omain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-10 22:14:41 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-10 22:14:53 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-10 22:14:53 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-10 22:14:41 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.8,PodIP:10.244.2.252,StartTime:2020-05-10 22:14:41 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-05-10 22:14:52 +0000 
UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://d00e65fe3b3a9effb9319ddc62557c17905e843f4a7a9437f5577802a3d2eae6,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.252,},},EphemeralContainerStatuses:[]ContainerStatus{},},} May 10 22:14:59.430: INFO: Pod "webserver-deployment-595b5b9587-5kbwg" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-5kbwg webserver-deployment-595b5b9587- deployment-1639 /api/v1/namespaces/deployment-1639/pods/webserver-deployment-595b5b9587-5kbwg 6bf55e1b-c4e1-4ba4-887f-d26d73c81a56 15079326 0 2020-05-10 22:14:56 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 2f2b1a7a-138a-4243-b4fe-2735b8751484 0xc002afc7b7 0xc002afc7b8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-2p9sl,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-2p9sl,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-2p9sl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdo
main:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-10 22:14:56 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-10 22:14:56 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-10 22:14:56 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-10 22:14:56 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.10,PodIP:,StartTime:2020-05-10 22:14:56 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 10 22:14:59.430: INFO: Pod "webserver-deployment-595b5b9587-5lqkp" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-5lqkp webserver-deployment-595b5b9587- deployment-1639 /api/v1/namespaces/deployment-1639/pods/webserver-deployment-595b5b9587-5lqkp bac7d9c9-f9e9-43e4-81f7-a15e21b5e5e0 15079128 0 2020-05-10 22:14:41 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 2f2b1a7a-138a-4243-b4fe-2735b8751484 0xc002afca67 0xc002afca68}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-2p9sl,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-2p9sl,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-2p9sl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdo
main:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-10 22:14:41 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-10 22:14:49 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-10 22:14:49 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-10 22:14:41 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.10,PodIP:10.244.1.86,StartTime:2020-05-10 22:14:41 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-05-10 22:14:49 +0000 
UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://625ea0576dac50bc4d86978653028b7e3ab3633347fbdf61c569a0162b9bc975,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.86,},},EphemeralContainerStatuses:[]ContainerStatus{},},} May 10 22:14:59.430: INFO: Pod "webserver-deployment-595b5b9587-6pgld" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-6pgld webserver-deployment-595b5b9587- deployment-1639 /api/v1/namespaces/deployment-1639/pods/webserver-deployment-595b5b9587-6pgld 85b1d8ec-9ce0-4fc8-9571-a3a6f83295a2 15079101 0 2020-05-10 22:14:41 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 2f2b1a7a-138a-4243-b4fe-2735b8751484 0xc002afccf7 0xc002afccf8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-2p9sl,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-2p9sl,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-2p9sl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdo
main:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-10 22:14:41 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-10 22:14:48 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-10 22:14:48 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-10 22:14:41 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.10,PodIP:10.244.1.84,StartTime:2020-05-10 22:14:41 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-05-10 22:14:47 +0000 
UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://e17d6b48df30bb1ef5b84a7fcf4342c74ba18e549ca1792d8aec5c2f85a778af,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.84,},},EphemeralContainerStatuses:[]ContainerStatus{},},} May 10 22:14:59.430: INFO: Pod "webserver-deployment-595b5b9587-7fr4l" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-7fr4l webserver-deployment-595b5b9587- deployment-1639 /api/v1/namespaces/deployment-1639/pods/webserver-deployment-595b5b9587-7fr4l 586f10b9-aa85-480e-b0a7-2d6200a1b5df 15079321 0 2020-05-10 22:14:56 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 2f2b1a7a-138a-4243-b4fe-2735b8751484 0xc002afcfd7 0xc002afcfd8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-2p9sl,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-2p9sl,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-2p9sl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subd
omain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-10 22:14:56 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-10 22:14:56 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-10 22:14:56 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-10 22:14:56 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.8,PodIP:,StartTime:2020-05-10 22:14:56 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 10 22:14:59.431: INFO: Pod "webserver-deployment-595b5b9587-7gdv9" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-7gdv9 webserver-deployment-595b5b9587- deployment-1639 /api/v1/namespaces/deployment-1639/pods/webserver-deployment-595b5b9587-7gdv9 6131cc67-aa69-4c47-a019-9828504983a2 15079348 0 2020-05-10 22:14:56 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 2f2b1a7a-138a-4243-b4fe-2735b8751484 0xc002afd237 0xc002afd238}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-2p9sl,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-2p9sl,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-2p9sl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdo
main:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-10 22:14:56 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-10 22:14:56 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-10 22:14:56 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-10 22:14:56 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.10,PodIP:,StartTime:2020-05-10 22:14:56 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 10 22:14:59.431: INFO: Pod "webserver-deployment-595b5b9587-7tf7d" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-7tf7d webserver-deployment-595b5b9587- deployment-1639 /api/v1/namespaces/deployment-1639/pods/webserver-deployment-595b5b9587-7tf7d 88c231c9-0c97-4545-86e4-56bcda92f8b0 15079274 0 2020-05-10 22:14:55 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 2f2b1a7a-138a-4243-b4fe-2735b8751484 0xc002afd477 0xc002afd478}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-2p9sl,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-2p9sl,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-2p9sl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subd
omain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-10 22:14:56 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-10 22:14:56 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-10 22:14:56 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-10 22:14:56 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.8,PodIP:,StartTime:2020-05-10 22:14:56 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 10 22:14:59.431: INFO: Pod "webserver-deployment-595b5b9587-8rfrm" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-8rfrm webserver-deployment-595b5b9587- deployment-1639 /api/v1/namespaces/deployment-1639/pods/webserver-deployment-595b5b9587-8rfrm bca27dd8-b828-47ea-afe3-91e5663a5c00 15079295 0 2020-05-10 22:14:56 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 2f2b1a7a-138a-4243-b4fe-2735b8751484 0xc002afd6f7 0xc002afd6f8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-2p9sl,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-2p9sl,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-2p9sl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdo
main:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-10 22:14:56 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-10 22:14:56 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-10 22:14:56 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-10 22:14:56 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.10,PodIP:,StartTime:2020-05-10 22:14:56 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 10 22:14:59.431: INFO: Pod "webserver-deployment-595b5b9587-bwgdr" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-bwgdr webserver-deployment-595b5b9587- deployment-1639 /api/v1/namespaces/deployment-1639/pods/webserver-deployment-595b5b9587-bwgdr 68ca3180-7cce-4609-abf4-212c184eb731 15079290 0 2020-05-10 22:14:55 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 2f2b1a7a-138a-4243-b4fe-2735b8751484 0xc002afd927 0xc002afd928}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-2p9sl,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-2p9sl,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-2p9sl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdo
main:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-10 22:14:56 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-10 22:14:56 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-10 22:14:56 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-10 22:14:56 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.10,PodIP:,StartTime:2020-05-10 22:14:56 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 10 22:14:59.431: INFO: Pod "webserver-deployment-595b5b9587-d4t7h" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-d4t7h webserver-deployment-595b5b9587- deployment-1639 /api/v1/namespaces/deployment-1639/pods/webserver-deployment-595b5b9587-d4t7h bb144fb9-850d-4790-a31c-da7514c21fbd 15079305 0 2020-05-10 22:14:56 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 2f2b1a7a-138a-4243-b4fe-2735b8751484 0xc002afdb67 0xc002afdb68}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-2p9sl,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-2p9sl,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-2p9sl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdo
main:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-10 22:14:56 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-10 22:14:56 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-10 22:14:56 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-10 22:14:56 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.10,PodIP:,StartTime:2020-05-10 22:14:56 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 10 22:14:59.432: INFO: Pod "webserver-deployment-595b5b9587-g6ctp" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-g6ctp webserver-deployment-595b5b9587- deployment-1639 /api/v1/namespaces/deployment-1639/pods/webserver-deployment-595b5b9587-g6ctp 2d0bc1b5-0456-4f40-96b9-d6b86d5f0326 15079138 0 2020-05-10 22:14:41 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 2f2b1a7a-138a-4243-b4fe-2735b8751484 0xc002afdd67 0xc002afdd68}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-2p9sl,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-2p9sl,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-2p9sl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subd
omain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-10 22:14:41 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-10 22:14:52 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-10 22:14:52 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-10 22:14:41 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.8,PodIP:10.244.2.248,StartTime:2020-05-10 22:14:41 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-05-10 22:14:51 +0000 
UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://96a75f872c16a870b9bc34a12f64d05b0f9c5bd45384d1aaa9a4a6658ae3221c,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.248,},},EphemeralContainerStatuses:[]ContainerStatus{},},} May 10 22:14:59.432: INFO: Pod "webserver-deployment-595b5b9587-h8cdc" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-h8cdc webserver-deployment-595b5b9587- deployment-1639 /api/v1/namespaces/deployment-1639/pods/webserver-deployment-595b5b9587-h8cdc 4c8e8e6f-d55b-47e8-8670-533178446274 15079315 0 2020-05-10 22:14:56 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 2f2b1a7a-138a-4243-b4fe-2735b8751484 0xc002afdee7 0xc002afdee8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-2p9sl,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-2p9sl,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-2p9sl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subd
omain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-10 22:14:56 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-10 22:14:56 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-10 22:14:56 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-10 22:14:56 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.8,PodIP:,StartTime:2020-05-10 22:14:56 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 10 22:14:59.432: INFO: Pod "webserver-deployment-595b5b9587-k65r8" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-k65r8 webserver-deployment-595b5b9587- deployment-1639 /api/v1/namespaces/deployment-1639/pods/webserver-deployment-595b5b9587-k65r8 94270d01-57df-421d-b050-091090f3e176 15079289 0 2020-05-10 22:14:56 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 2f2b1a7a-138a-4243-b4fe-2735b8751484 0xc002d40057 0xc002d40058}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-2p9sl,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-2p9sl,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-2p9sl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subd
omain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-10 22:14:56 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-10 22:14:56 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-10 22:14:56 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-10 22:14:56 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.8,PodIP:,StartTime:2020-05-10 22:14:56 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 10 22:14:59.432: INFO: Pod "webserver-deployment-595b5b9587-lh7h4" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-lh7h4 webserver-deployment-595b5b9587- deployment-1639 /api/v1/namespaces/deployment-1639/pods/webserver-deployment-595b5b9587-lh7h4 f47ee8b6-20ca-4f4c-ac9a-039e3c5049f0 15079328 0 2020-05-10 22:14:56 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 2f2b1a7a-138a-4243-b4fe-2735b8751484 0xc002d401b7 0xc002d401b8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-2p9sl,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-2p9sl,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-2p9sl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subd
omain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-10 22:14:56 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-10 22:14:56 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-10 22:14:56 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-10 22:14:56 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.8,PodIP:,StartTime:2020-05-10 22:14:56 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 10 22:14:59.432: INFO: Pod "webserver-deployment-595b5b9587-nnkdj" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-nnkdj webserver-deployment-595b5b9587- deployment-1639 /api/v1/namespaces/deployment-1639/pods/webserver-deployment-595b5b9587-nnkdj 57c22043-0796-4a1b-aa4e-57f5c4ca25c8 15079277 0 2020-05-10 22:14:55 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 2f2b1a7a-138a-4243-b4fe-2735b8751484 0xc002d40317 0xc002d40318}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-2p9sl,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-2p9sl,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-2p9sl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdo
main:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-10 22:14:56 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-10 22:14:56 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-10 22:14:56 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-10 22:14:56 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.10,PodIP:,StartTime:2020-05-10 22:14:56 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 10 22:14:59.433: INFO: Pod "webserver-deployment-595b5b9587-ptx8g" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-ptx8g webserver-deployment-595b5b9587- deployment-1639 /api/v1/namespaces/deployment-1639/pods/webserver-deployment-595b5b9587-ptx8g a8775a42-76c9-4f8c-a13e-8312bc6d92fe 15079097 0 2020-05-10 22:14:41 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 2f2b1a7a-138a-4243-b4fe-2735b8751484 0xc002d40577 0xc002d40578}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-2p9sl,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-2p9sl,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-2p9sl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdo
main:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-10 22:14:41 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-10 22:14:48 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-10 22:14:48 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-10 22:14:41 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.10,PodIP:10.244.1.83,StartTime:2020-05-10 22:14:41 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-05-10 22:14:46 +0000 
UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://c5a2222e988f3f60b56e17cfcc5c6df447a52cca69f9717fa1b6f71b145a7e84,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.83,},},EphemeralContainerStatuses:[]ContainerStatus{},},} May 10 22:14:59.433: INFO: Pod "webserver-deployment-595b5b9587-q9d4h" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-q9d4h webserver-deployment-595b5b9587- deployment-1639 /api/v1/namespaces/deployment-1639/pods/webserver-deployment-595b5b9587-q9d4h 571a451f-f3d3-4f83-93be-190f8a51fb07 15079141 0 2020-05-10 22:14:41 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 2f2b1a7a-138a-4243-b4fe-2735b8751484 0xc002d407b7 0xc002d407b8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-2p9sl,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-2p9sl,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-2p9sl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subd
omain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-10 22:14:41 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-10 22:14:52 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-10 22:14:52 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-10 22:14:41 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.8,PodIP:10.244.2.249,StartTime:2020-05-10 22:14:41 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-05-10 22:14:51 +0000 
UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://9967c4f0625b26134ceb55111389538dea598df695dd938d468c723f74c08fe3,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.249,},},EphemeralContainerStatuses:[]ContainerStatus{},},} May 10 22:14:59.433: INFO: Pod "webserver-deployment-595b5b9587-rtb5h" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-rtb5h webserver-deployment-595b5b9587- deployment-1639 /api/v1/namespaces/deployment-1639/pods/webserver-deployment-595b5b9587-rtb5h 12138270-2468-4188-a840-3ee56d272fe3 15079312 0 2020-05-10 22:14:56 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 2f2b1a7a-138a-4243-b4fe-2735b8751484 0xc002d40a17 0xc002d40a18}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-2p9sl,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-2p9sl,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-2p9sl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdo
main:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-10 22:14:56 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-10 22:14:56 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-10 22:14:56 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-10 22:14:56 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.10,PodIP:,StartTime:2020-05-10 22:14:56 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 10 22:14:59.433: INFO: Pod "webserver-deployment-595b5b9587-ztqwz" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-ztqwz webserver-deployment-595b5b9587- deployment-1639 /api/v1/namespaces/deployment-1639/pods/webserver-deployment-595b5b9587-ztqwz f7b6c2e5-3c3e-47db-8171-091dbb8483be 15079160 0 2020-05-10 22:14:41 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 2f2b1a7a-138a-4243-b4fe-2735b8751484 0xc002d40cc7 0xc002d40cc8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-2p9sl,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-2p9sl,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-2p9sl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subd
omain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-10 22:14:41 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-10 22:14:53 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-10 22:14:53 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-10 22:14:41 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.8,PodIP:10.244.2.250,StartTime:2020-05-10 22:14:41 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-05-10 22:14:52 +0000 
UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://04f73995e0a51e62a35d5c053c6b2bab11f1068c4090dd00c932e716a3e4025f,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.250,},},EphemeralContainerStatuses:[]ContainerStatus{},},} May 10 22:14:59.433: INFO: Pod "webserver-deployment-c7997dcc8-9bl74" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-9bl74 webserver-deployment-c7997dcc8- deployment-1639 /api/v1/namespaces/deployment-1639/pods/webserver-deployment-c7997dcc8-9bl74 d115945c-607b-4545-aa76-b035f96c9cfc 15079309 0 2020-05-10 22:14:56 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 7a14f7bf-0340-4dbc-a7fd-334f2d6ce5cc 0xc002d40f77 0xc002d40f78}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-2p9sl,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-2p9sl,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-2p9sl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,Sche
dulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-10 22:14:56 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-10 22:14:56 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-10 22:14:56 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-10 22:14:56 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.8,PodIP:,StartTime:2020-05-10 22:14:56 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 10 22:14:59.434: INFO: Pod "webserver-deployment-c7997dcc8-c2ccc" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-c2ccc webserver-deployment-c7997dcc8- deployment-1639 /api/v1/namespaces/deployment-1639/pods/webserver-deployment-c7997dcc8-c2ccc a5a77a3a-118e-47a2-99a3-3f17529c97c0 15079214 0 2020-05-10 22:14:53 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 7a14f7bf-0340-4dbc-a7fd-334f2d6ce5cc 0xc002d411b7 0xc002d411b8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-2p9sl,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-2p9sl,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-2p9sl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,Sche
dulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-10 22:14:54 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-10 22:14:54 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-10 22:14:54 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-10 22:14:53 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.8,PodIP:,StartTime:2020-05-10 22:14:54 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 10 22:14:59.434: INFO: Pod "webserver-deployment-c7997dcc8-gvpc9" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-gvpc9 webserver-deployment-c7997dcc8- deployment-1639 /api/v1/namespaces/deployment-1639/pods/webserver-deployment-c7997dcc8-gvpc9 84a0de8d-e249-4057-831f-8c539715cfcb 15079191 0 2020-05-10 22:14:53 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 7a14f7bf-0340-4dbc-a7fd-334f2d6ce5cc 0xc002d414b7 0xc002d414b8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-2p9sl,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-2p9sl,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-2p9sl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,Sched
ulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-10 22:14:53 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-10 22:14:53 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-10 22:14:53 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-10 22:14:53 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.10,PodIP:,StartTime:2020-05-10 22:14:53 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 10 22:14:59.434: INFO: Pod "webserver-deployment-c7997dcc8-jchgr" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-jchgr webserver-deployment-c7997dcc8- deployment-1639 /api/v1/namespaces/deployment-1639/pods/webserver-deployment-c7997dcc8-jchgr 308f623c-5c73-46a9-98cc-b58d94db95a9 15079318 0 2020-05-10 22:14:56 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 7a14f7bf-0340-4dbc-a7fd-334f2d6ce5cc 0xc002d41637 0xc002d41638}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-2p9sl,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-2p9sl,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-2p9sl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,Sched
ulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-10 22:14:56 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-10 22:14:56 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-10 22:14:56 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-10 22:14:56 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.10,PodIP:,StartTime:2020-05-10 22:14:56 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 10 22:14:59.434: INFO: Pod "webserver-deployment-c7997dcc8-klkgb" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-klkgb webserver-deployment-c7997dcc8- deployment-1639 /api/v1/namespaces/deployment-1639/pods/webserver-deployment-c7997dcc8-klkgb 2e949ef2-84fa-4cf1-8f7e-8ae337271f12 15079361 0 2020-05-10 22:14:56 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 7a14f7bf-0340-4dbc-a7fd-334f2d6ce5cc 0xc002d417c7 0xc002d417c8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-2p9sl,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-2p9sl,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-2p9sl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,Sche
dulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-10 22:14:56 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-10 22:14:56 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-10 22:14:56 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-10 22:14:56 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.8,PodIP:,StartTime:2020-05-10 22:14:56 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 10 22:14:59.434: INFO: Pod "webserver-deployment-c7997dcc8-ltmph" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-ltmph webserver-deployment-c7997dcc8- deployment-1639 /api/v1/namespaces/deployment-1639/pods/webserver-deployment-c7997dcc8-ltmph f195c118-534e-43e8-9405-6016bcab1821 15079349 0 2020-05-10 22:14:56 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 7a14f7bf-0340-4dbc-a7fd-334f2d6ce5cc 0xc002d41947 0xc002d41948}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-2p9sl,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-2p9sl,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-2p9sl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,Sche
dulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-10 22:14:56 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-10 22:14:56 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-10 22:14:56 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-10 22:14:56 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.8,PodIP:,StartTime:2020-05-10 22:14:56 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 10 22:14:59.434: INFO: Pod "webserver-deployment-c7997dcc8-p5wgz" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-p5wgz webserver-deployment-c7997dcc8- deployment-1639 /api/v1/namespaces/deployment-1639/pods/webserver-deployment-c7997dcc8-p5wgz d3905deb-e750-4f0f-8839-60a08f635d7c 15079368 0 2020-05-10 22:14:56 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 7a14f7bf-0340-4dbc-a7fd-334f2d6ce5cc 0xc002d41c37 0xc002d41c38}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-2p9sl,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-2p9sl,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-2p9sl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,Sched
ulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-10 22:14:56 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-10 22:14:56 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-10 22:14:56 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-10 22:14:56 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.10,PodIP:,StartTime:2020-05-10 22:14:56 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 10 22:14:59.435: INFO: Pod "webserver-deployment-c7997dcc8-pnxnf" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-pnxnf webserver-deployment-c7997dcc8- deployment-1639 /api/v1/namespaces/deployment-1639/pods/webserver-deployment-c7997dcc8-pnxnf 835a36cd-6146-43e1-9c2a-753062c4d0bd 15079369 0 2020-05-10 22:14:53 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 7a14f7bf-0340-4dbc-a7fd-334f2d6ce5cc 0xc002d41ee7 0xc002d41ee8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-2p9sl,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-2p9sl,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-2p9sl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,Sche
dulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-10 22:14:53 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-10 22:14:53 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-10 22:14:53 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-10 22:14:53 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.8,PodIP:10.244.2.253,StartTime:2020-05-10 22:14:53 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ErrImagePull,Message:rpc error: code = Unknown desc = failed to pull and unpack image "docker.io/library/webserver:404": failed to resolve reference "docker.io/library/webserver:404": pull access denied, repository does not exist or may require authorization: server message: insufficient_scope: authorization 
failed,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.253,},},EphemeralContainerStatuses:[]ContainerStatus{},},} May 10 22:14:59.435: INFO: Pod "webserver-deployment-c7997dcc8-sg5j5" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-sg5j5 webserver-deployment-c7997dcc8- deployment-1639 /api/v1/namespaces/deployment-1639/pods/webserver-deployment-c7997dcc8-sg5j5 0c4f8189-da98-4ab6-8bbe-fa929d7f0795 15079206 0 2020-05-10 22:14:53 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 7a14f7bf-0340-4dbc-a7fd-334f2d6ce5cc 0xc00087c577 0xc00087c578}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-2p9sl,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-2p9sl,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-2p9sl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPres
ent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-10 22:14:53 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-10 22:14:53 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: 
[httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-10 22:14:53 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-10 22:14:53 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.10,PodIP:,StartTime:2020-05-10 22:14:53 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 10 22:14:59.435: INFO: Pod "webserver-deployment-c7997dcc8-sldtv" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-sldtv webserver-deployment-c7997dcc8- deployment-1639 /api/v1/namespaces/deployment-1639/pods/webserver-deployment-c7997dcc8-sldtv 784cc35c-c743-4f4b-8229-1302392394de 15079298 0 2020-05-10 22:14:56 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 7a14f7bf-0340-4dbc-a7fd-334f2d6ce5cc 0xc00087c8c7 0xc00087c8c8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-2p9sl,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-2p9sl,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-2p9sl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,Sche
dulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-10 22:14:56 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-10 22:14:56 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-10 22:14:56 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-10 22:14:56 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.8,PodIP:,StartTime:2020-05-10 22:14:56 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 10 22:14:59.435: INFO: Pod "webserver-deployment-c7997dcc8-t6bsr" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-t6bsr webserver-deployment-c7997dcc8- deployment-1639 /api/v1/namespaces/deployment-1639/pods/webserver-deployment-c7997dcc8-t6bsr 506e6b69-9fb0-4ceb-9926-87e3a5a0622b 15079360 0 2020-05-10 22:14:56 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 7a14f7bf-0340-4dbc-a7fd-334f2d6ce5cc 0xc00087ca47 0xc00087ca48}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-2p9sl,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-2p9sl,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-2p9sl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,Sched
ulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-10 22:14:56 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-10 22:14:56 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-10 22:14:56 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-10 22:14:56 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.10,PodIP:,StartTime:2020-05-10 22:14:56 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 10 22:14:59.435: INFO: Pod "webserver-deployment-c7997dcc8-w28b5" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-w28b5 webserver-deployment-c7997dcc8- deployment-1639 /api/v1/namespaces/deployment-1639/pods/webserver-deployment-c7997dcc8-w28b5 69f5b2a6-f9e7-40ba-b658-913f0a18d5b0 15079355 0 2020-05-10 22:14:56 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 7a14f7bf-0340-4dbc-a7fd-334f2d6ce5cc 0xc00087cbc7 0xc00087cbc8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-2p9sl,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-2p9sl,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-2p9sl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,Sche
dulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-10 22:14:56 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-10 22:14:56 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-10 22:14:56 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-10 22:14:56 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.8,PodIP:,StartTime:2020-05-10 22:14:56 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 10 22:14:59.435: INFO: Pod "webserver-deployment-c7997dcc8-xq8jx" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-xq8jx webserver-deployment-c7997dcc8- deployment-1639 /api/v1/namespaces/deployment-1639/pods/webserver-deployment-c7997dcc8-xq8jx 78d9fcdb-f7e9-4a55-ae95-07113111dcc7 15079354 0 2020-05-10 22:14:53 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 7a14f7bf-0340-4dbc-a7fd-334f2d6ce5cc 0xc00087cd57 0xc00087cd58}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-2p9sl,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-2p9sl,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-2p9sl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,Sched
ulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-10 22:14:54 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-10 22:14:54 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-10 22:14:54 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-10 22:14:53 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.10,PodIP:10.244.1.88,StartTime:2020-05-10 22:14:54 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ErrImagePull,Message:rpc error: code = Unknown desc = failed to pull and unpack image "docker.io/library/webserver:404": failed to resolve reference "docker.io/library/webserver:404": pull access denied, repository does not exist or may require authorization: server message: insufficient_scope: authorization 
failed,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.88,},},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 10 22:14:59.435: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-1639" for this suite. • [SLOW TEST:19.716 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should support proportional scaling [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Deployment deployment should support proportional scaling [Conformance]","total":278,"completed":229,"skipped":3643,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl patch should add annotations for pods in rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 10 22:15:00.678: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [It] should add annotations for pods in rc [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating Agnhost RC May 10 22:15:02.076: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-7152' May 10 22:15:03.427: INFO: stderr: "" May 10 22:15:03.427: INFO: stdout: "replicationcontroller/agnhost-master created\n" STEP: Waiting for Agnhost master to start. May 10 22:15:04.432: INFO: Selector matched 1 pods for map[app:agnhost] May 10 22:15:04.432: INFO: Found 0 / 1 May 10 22:15:05.742: INFO: Selector matched 1 pods for map[app:agnhost] May 10 22:15:05.742: INFO: Found 0 / 1 May 10 22:15:07.287: INFO: Selector matched 1 pods for map[app:agnhost] May 10 22:15:07.287: INFO: Found 0 / 1 May 10 22:15:07.603: INFO: Selector matched 1 pods for map[app:agnhost] May 10 22:15:07.603: INFO: Found 0 / 1 May 10 22:15:08.818: INFO: Selector matched 1 pods for map[app:agnhost] May 10 22:15:08.818: INFO: Found 0 / 1 May 10 22:15:09.892: INFO: Selector matched 1 pods for map[app:agnhost] May 10 22:15:09.893: INFO: Found 0 / 1 May 10 22:15:11.148: INFO: Selector matched 1 pods for map[app:agnhost] May 10 22:15:11.148: INFO: Found 0 / 1 May 10 22:15:12.056: INFO: Selector matched 1 pods for map[app:agnhost] May 10 22:15:12.057: INFO: Found 0 / 1 May 10 22:15:12.957: INFO: Selector matched 1 pods for map[app:agnhost] May 10 22:15:12.957: INFO: Found 0 / 1 May 10 22:15:13.550: INFO: Selector matched 1 pods for map[app:agnhost] May 10 22:15:13.550: INFO: Found 0 / 1 May 10 22:15:14.651: INFO: Selector matched 1 pods for map[app:agnhost] May 10 22:15:14.651: INFO: Found 0 / 1 May 10 22:15:15.749: INFO: Selector matched 1 pods for map[app:agnhost] May 10 22:15:15.749: INFO: Found 0 / 1 May 10 22:15:16.850: INFO: Selector matched 1 pods for map[app:agnhost] May 10 22:15:16.850: INFO: Found 0 / 1 May 10 22:15:17.604: INFO: Selector matched 1 pods for map[app:agnhost] May 10 22:15:17.604: INFO: Found 0 / 1 May 10 22:15:18.772: INFO: 
Selector matched 1 pods for map[app:agnhost] May 10 22:15:18.772: INFO: Found 0 / 1 May 10 22:15:19.555: INFO: Selector matched 1 pods for map[app:agnhost] May 10 22:15:19.555: INFO: Found 0 / 1 May 10 22:15:20.650: INFO: Selector matched 1 pods for map[app:agnhost] May 10 22:15:20.650: INFO: Found 1 / 1 May 10 22:15:20.650: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 STEP: patching all pods May 10 22:15:20.747: INFO: Selector matched 1 pods for map[app:agnhost] May 10 22:15:20.747: INFO: ForEach: Found 1 pods from the filter. Now looping through them. May 10 22:15:20.747: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config patch pod agnhost-master-sflkk --namespace=kubectl-7152 -p {"metadata":{"annotations":{"x":"y"}}}' May 10 22:15:21.028: INFO: stderr: "" May 10 22:15:21.028: INFO: stdout: "pod/agnhost-master-sflkk patched\n" STEP: checking annotations May 10 22:15:21.238: INFO: Selector matched 1 pods for map[app:agnhost] May 10 22:15:21.238: INFO: ForEach: Found 1 pods from the filter. Now looping through them. [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 10 22:15:21.239: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-7152" for this suite. 
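The `kubectl patch pod agnhost-master-sflkk ... -p {"metadata":{"annotations":{"x":"y"}}}` invocation above applies a merge patch that adds the annotation `x: y` to the pod. A minimal sketch of how such a patch body folds into pod metadata under JSON merge-patch semantics (RFC 7386); the starting pod metadata here is illustrative, not taken from the log:

```python
import json

def json_merge_patch(target, patch):
    """Apply an RFC 7386 JSON merge patch: dicts merge recursively,
    None deletes a key, anything else replaces the target value."""
    if not isinstance(patch, dict):
        return patch
    result = dict(target) if isinstance(target, dict) else {}
    for key, value in patch.items():
        if value is None:
            result.pop(key, None)
        else:
            result[key] = json_merge_patch(result.get(key, {}), value)
    return result

# Patch body exactly as passed to kubectl in the log above.
patch = json.loads('{"metadata":{"annotations":{"x":"y"}}}')

# Hypothetical pod metadata before patching (not from the log).
pod = {"metadata": {"name": "agnhost-master-sflkk",
                    "labels": {"app": "agnhost"}}}

patched = json_merge_patch(pod, patch)
print(patched["metadata"]["annotations"])  # {'x': 'y'}
```

Note that a merge patch only touches the keys it names: the existing `name` and `labels` fields survive, which is why the test can then re-list pods by the same `app: agnhost` selector and find the annotated pod.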
• [SLOW TEST:20.620 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl patch /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1432 should add annotations for pods in rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl patch should add annotations for pods in rc [Conformance]","total":278,"completed":230,"skipped":3686,"failed":0} [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 10 22:15:21.299: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name configmap-test-volume-map-cd78e25c-fda1-4ac8-ba14-dea50221064d STEP: Creating a pod to test consume configMaps May 10 22:15:22.336: INFO: Waiting up to 5m0s for pod "pod-configmaps-68d033b1-8ad1-428e-bb1a-deed200678d6" in namespace "configmap-1393" to be "success or failure" May 10 22:15:22.465: INFO: Pod "pod-configmaps-68d033b1-8ad1-428e-bb1a-deed200678d6": Phase="Pending", Reason="", readiness=false. 
Elapsed: 128.969742ms May 10 22:15:24.639: INFO: Pod "pod-configmaps-68d033b1-8ad1-428e-bb1a-deed200678d6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.30312352s May 10 22:15:26.849: INFO: Pod "pod-configmaps-68d033b1-8ad1-428e-bb1a-deed200678d6": Phase="Pending", Reason="", readiness=false. Elapsed: 4.512448842s May 10 22:15:28.903: INFO: Pod "pod-configmaps-68d033b1-8ad1-428e-bb1a-deed200678d6": Phase="Running", Reason="", readiness=true. Elapsed: 6.566933038s May 10 22:15:31.068: INFO: Pod "pod-configmaps-68d033b1-8ad1-428e-bb1a-deed200678d6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.732126811s STEP: Saw pod success May 10 22:15:31.068: INFO: Pod "pod-configmaps-68d033b1-8ad1-428e-bb1a-deed200678d6" satisfied condition "success or failure" May 10 22:15:31.196: INFO: Trying to get logs from node jerma-worker pod pod-configmaps-68d033b1-8ad1-428e-bb1a-deed200678d6 container configmap-volume-test: STEP: delete the pod May 10 22:15:31.369: INFO: Waiting for pod pod-configmaps-68d033b1-8ad1-428e-bb1a-deed200678d6 to disappear May 10 22:15:31.402: INFO: Pod pod-configmaps-68d033b1-8ad1-428e-bb1a-deed200678d6 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 10 22:15:31.402: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-1393" for this suite. 
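The repeated `Waiting up to 5m0s for pod ... to be "success or failure"` / `Phase="Pending" ... Elapsed: ...` lines come from a generic poll-until-condition loop with a deadline. A hedged sketch of that pattern (the names `wait_for` and the injectable `clock`/`sleep` hooks are illustrative, not the e2e framework's actual API):

```python
import time

def wait_for(condition, timeout=300.0, interval=2.0,
             clock=time.monotonic, sleep=time.sleep):
    """Poll condition() every `interval` seconds until it returns a truthy
    value or `timeout` seconds elapse; raise TimeoutError on expiry."""
    deadline = clock() + timeout
    while True:
        result = condition()
        if result:
            return result
        if clock() >= deadline:
            raise TimeoutError("condition not met within %.0fs" % timeout)
        sleep(interval)

# Example: a fake pod walking through the phases seen in the log above.
phases = iter(["Pending", "Pending", "Running", "Succeeded"])
done = wait_for(lambda: next(phases) in ("Succeeded", "Failed"),
                timeout=5, interval=0, sleep=lambda s: None)
```

The real framework additionally logs the elapsed time on every iteration, which is where the `Elapsed: 2.30312352s`-style lines originate.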
• [SLOW TEST:10.125 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":231,"skipped":3686,"failed":0} SSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 10 22:15:31.424: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 10 22:15:32.351: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 10 22:15:34.361: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", 
Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724745732, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724745732, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724745732, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724745732, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} May 10 22:15:36.394: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724745732, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724745732, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724745732, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724745732, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 10 22:15:39.466: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] patching/updating a validating webhook should work [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a validating webhook configuration STEP: Creating a configMap that does not comply to the validation webhook rules STEP: Updating a validating webhook configuration's rules to not include the create operation STEP: Creating a configMap that does not comply to the validation webhook rules STEP: Patching a validating webhook configuration's rules to include the create operation STEP: Creating a configMap that does not comply to the validation webhook rules [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 10 22:15:39.575: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-3451" for this suite. STEP: Destroying namespace "webhook-3451-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:8.270 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 patching/updating a validating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance]","total":278,"completed":232,"skipped":3693,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Update Demo should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 10 22:15:39.695: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [BeforeEach] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:324 [It] should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating a replication controller May 10 22:15:39.778: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-773' May 10 22:15:40.034: INFO: stderr: "" May 10 22:15:40.034: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. May 10 22:15:40.034: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-773' May 10 22:15:40.165: INFO: stderr: "" May 10 22:15:40.165: INFO: stdout: "update-demo-nautilus-4hdcl update-demo-nautilus-j5chb " May 10 22:15:40.165: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-4hdcl -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-773' May 10 22:15:40.431: INFO: stderr: "" May 10 22:15:40.431: INFO: stdout: "" May 10 22:15:40.431: INFO: update-demo-nautilus-4hdcl is created but not running May 10 22:15:45.432: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-773' May 10 22:15:45.546: INFO: stderr: "" May 10 22:15:45.546: INFO: stdout: "update-demo-nautilus-4hdcl update-demo-nautilus-j5chb " May 10 22:15:45.546: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-4hdcl -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-773' May 10 22:15:45.696: INFO: stderr: "" May 10 22:15:45.696: INFO: stdout: "true" May 10 22:15:45.696: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-4hdcl -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-773' May 10 22:15:45.794: INFO: stderr: "" May 10 22:15:45.794: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 10 22:15:45.794: INFO: validating pod update-demo-nautilus-4hdcl May 10 22:15:45.798: INFO: got data: { "image": "nautilus.jpg" } May 10 22:15:45.798: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . May 10 22:15:45.798: INFO: update-demo-nautilus-4hdcl is verified up and running May 10 22:15:45.798: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-j5chb -o template --template={{if (exists . 
"status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-773' May 10 22:15:45.921: INFO: stderr: "" May 10 22:15:45.921: INFO: stdout: "true" May 10 22:15:45.921: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-j5chb -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-773' May 10 22:15:46.029: INFO: stderr: "" May 10 22:15:46.029: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 10 22:15:46.029: INFO: validating pod update-demo-nautilus-j5chb May 10 22:15:46.033: INFO: got data: { "image": "nautilus.jpg" } May 10 22:15:46.033: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . May 10 22:15:46.033: INFO: update-demo-nautilus-j5chb is verified up and running STEP: scaling down the replication controller May 10 22:15:46.034: INFO: scanned /root for discovery docs: May 10 22:15:46.035: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=1 --timeout=5m --namespace=kubectl-773' May 10 22:15:47.335: INFO: stderr: "" May 10 22:15:47.335: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" STEP: waiting for all containers in name=update-demo pods to come up. 
May 10 22:15:47.335: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-773' May 10 22:15:47.494: INFO: stderr: "" May 10 22:15:47.495: INFO: stdout: "update-demo-nautilus-4hdcl update-demo-nautilus-j5chb " STEP: Replicas for name=update-demo: expected=1 actual=2 May 10 22:15:52.495: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-773' May 10 22:15:52.597: INFO: stderr: "" May 10 22:15:52.597: INFO: stdout: "update-demo-nautilus-4hdcl " May 10 22:15:52.597: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-4hdcl -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-773' May 10 22:15:52.694: INFO: stderr: "" May 10 22:15:52.694: INFO: stdout: "true" May 10 22:15:52.694: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-4hdcl -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-773' May 10 22:15:52.786: INFO: stderr: "" May 10 22:15:52.786: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 10 22:15:52.786: INFO: validating pod update-demo-nautilus-4hdcl May 10 22:15:52.790: INFO: got data: { "image": "nautilus.jpg" } May 10 22:15:52.790: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . 
May 10 22:15:52.790: INFO: update-demo-nautilus-4hdcl is verified up and running STEP: scaling up the replication controller May 10 22:15:52.793: INFO: scanned /root for discovery docs: May 10 22:15:52.793: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=2 --timeout=5m --namespace=kubectl-773' May 10 22:15:53.926: INFO: stderr: "" May 10 22:15:53.926: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" STEP: waiting for all containers in name=update-demo pods to come up. May 10 22:15:53.926: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-773' May 10 22:15:54.039: INFO: stderr: "" May 10 22:15:54.039: INFO: stdout: "update-demo-nautilus-4hdcl update-demo-nautilus-9drtf " May 10 22:15:54.039: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-4hdcl -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-773' May 10 22:15:54.140: INFO: stderr: "" May 10 22:15:54.140: INFO: stdout: "true" May 10 22:15:54.140: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-4hdcl -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-773' May 10 22:15:54.246: INFO: stderr: "" May 10 22:15:54.246: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 10 22:15:54.246: INFO: validating pod update-demo-nautilus-4hdcl May 10 22:15:54.249: INFO: got data: { "image": "nautilus.jpg" } May 10 22:15:54.249: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . 
May 10 22:15:54.249: INFO: update-demo-nautilus-4hdcl is verified up and running May 10 22:15:54.249: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-9drtf -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-773' May 10 22:15:54.515: INFO: stderr: "" May 10 22:15:54.515: INFO: stdout: "" May 10 22:15:54.515: INFO: update-demo-nautilus-9drtf is created but not running May 10 22:15:59.515: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-773' May 10 22:15:59.618: INFO: stderr: "" May 10 22:15:59.618: INFO: stdout: "update-demo-nautilus-4hdcl update-demo-nautilus-9drtf " May 10 22:15:59.618: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-4hdcl -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-773' May 10 22:15:59.805: INFO: stderr: "" May 10 22:15:59.805: INFO: stdout: "true" May 10 22:15:59.805: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-4hdcl -o template --template={{if (exists . 
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-773' May 10 22:15:59.904: INFO: stderr: "" May 10 22:15:59.904: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 10 22:15:59.905: INFO: validating pod update-demo-nautilus-4hdcl May 10 22:15:59.908: INFO: got data: { "image": "nautilus.jpg" } May 10 22:15:59.908: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . May 10 22:15:59.908: INFO: update-demo-nautilus-4hdcl is verified up and running May 10 22:15:59.908: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-9drtf -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-773' May 10 22:16:00.004: INFO: stderr: "" May 10 22:16:00.004: INFO: stdout: "true" May 10 22:16:00.004: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-9drtf -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-773' May 10 22:16:00.093: INFO: stderr: "" May 10 22:16:00.093: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 10 22:16:00.093: INFO: validating pod update-demo-nautilus-9drtf May 10 22:16:00.097: INFO: got data: { "image": "nautilus.jpg" } May 10 22:16:00.097: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . 
May 10 22:16:00.097: INFO: update-demo-nautilus-9drtf is verified up and running STEP: using delete to clean up resources May 10 22:16:00.097: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-773' May 10 22:16:00.224: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" May 10 22:16:00.224: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" May 10 22:16:00.224: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-773' May 10 22:16:00.358: INFO: stderr: "No resources found in kubectl-773 namespace.\n" May 10 22:16:00.358: INFO: stdout: "" May 10 22:16:00.358: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-773 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' May 10 22:16:00.504: INFO: stderr: "" May 10 22:16:00.504: INFO: stdout: "update-demo-nautilus-4hdcl\nupdate-demo-nautilus-9drtf\n" May 10 22:16:01.004: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-773' May 10 22:16:01.097: INFO: stderr: "No resources found in kubectl-773 namespace.\n" May 10 22:16:01.098: INFO: stdout: "" May 10 22:16:01.098: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-773 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' May 10 22:16:01.202: INFO: stderr: "" May 10 22:16:01.202: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 10 22:16:01.202: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-773" for this suite. • [SLOW TEST:21.514 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:322 should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Update Demo should scale a replication controller [Conformance]","total":278,"completed":233,"skipped":3710,"failed":0} SSSSSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 10 22:16:01.209: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the 
termination message should be set May 10 22:16:05.649: INFO: Expected: &{} to match Container's Termination Message: -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 10 22:16:05.670: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-5204" for this suite. •{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":278,"completed":234,"skipped":3721,"failed":0} SSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 10 22:16:05.676: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0666 on tmpfs May 10 22:16:05.773: INFO: Waiting up to 5m0s for pod "pod-b151d036-f252-48c6-83a9-43008c050989" in namespace "emptydir-2299" to be "success or failure" May 10 22:16:05.951: INFO: Pod "pod-b151d036-f252-48c6-83a9-43008c050989": Phase="Pending", Reason="", readiness=false. Elapsed: 177.830513ms May 10 22:16:08.005: INFO: Pod "pod-b151d036-f252-48c6-83a9-43008c050989": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.231484062s May 10 22:16:10.008: INFO: Pod "pod-b151d036-f252-48c6-83a9-43008c050989": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.235082646s STEP: Saw pod success May 10 22:16:10.008: INFO: Pod "pod-b151d036-f252-48c6-83a9-43008c050989" satisfied condition "success or failure" May 10 22:16:10.011: INFO: Trying to get logs from node jerma-worker pod pod-b151d036-f252-48c6-83a9-43008c050989 container test-container: STEP: delete the pod May 10 22:16:10.058: INFO: Waiting for pod pod-b151d036-f252-48c6-83a9-43008c050989 to disappear May 10 22:16:10.064: INFO: Pod pod-b151d036-f252-48c6-83a9-43008c050989 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 10 22:16:10.064: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-2299" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":235,"skipped":3726,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 10 22:16:10.072: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0777 on node default medium May 10 
22:16:10.139: INFO: Waiting up to 5m0s for pod "pod-218d27ae-1c0a-4b27-8185-f96cf8e4dab7" in namespace "emptydir-7051" to be "success or failure" May 10 22:16:10.157: INFO: Pod "pod-218d27ae-1c0a-4b27-8185-f96cf8e4dab7": Phase="Pending", Reason="", readiness=false. Elapsed: 18.147152ms May 10 22:16:12.184: INFO: Pod "pod-218d27ae-1c0a-4b27-8185-f96cf8e4dab7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.045805205s May 10 22:16:14.189: INFO: Pod "pod-218d27ae-1c0a-4b27-8185-f96cf8e4dab7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.050502209s STEP: Saw pod success May 10 22:16:14.189: INFO: Pod "pod-218d27ae-1c0a-4b27-8185-f96cf8e4dab7" satisfied condition "success or failure" May 10 22:16:14.192: INFO: Trying to get logs from node jerma-worker pod pod-218d27ae-1c0a-4b27-8185-f96cf8e4dab7 container test-container: STEP: delete the pod May 10 22:16:14.244: INFO: Waiting for pod pod-218d27ae-1c0a-4b27-8185-f96cf8e4dab7 to disappear May 10 22:16:14.253: INFO: Pod pod-218d27ae-1c0a-4b27-8185-f96cf8e4dab7 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 10 22:16:14.253: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-7051" for this suite. 
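The EmptyDir test names such as `(root,0777,default)` and `(root,0666,tmpfs)` encode the user, the file mode being asserted, and the volume medium: the pod writes a file into the emptyDir mount with the requested mode and the test container echoes the permission bits back for verification. A local analogue of that mode check (a sketch against a temp directory, not an actual emptyDir volume):

```python
import os
import stat
import tempfile

def mode_bits(path):
    """Return the permission bits of path as an octal string like '0o777'."""
    return oct(stat.S_IMODE(os.stat(path).st_mode))

# Create a scratch file and set the 0777 mode the test name refers to;
# chmod is not subject to the umask, so the bits land exactly as requested.
with tempfile.TemporaryDirectory() as d:
    target = os.path.join(d, "mount-test-file")
    open(target, "w").close()
    os.chmod(target, 0o777)
    observed = mode_bits(target)
```

The conformance variant does the same comparison inside the pod, which is why these cases carry the `[LinuxOnly]` tag: the mode semantics are POSIX-specific.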
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":236,"skipped":3741,"failed":0} SSSSSSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 10 22:16:14.261: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153 [It] should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the pod May 10 22:16:14.295: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 10 22:16:21.963: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-9720" for this suite. 
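The InitContainer test above asserts the kubelet's ordering contract: init containers run one at a time, in declaration order, each must exit successfully before the next starts, and the app containers start only after every init container has succeeded. A simplified model of that contract (it deliberately ignores restart/back-off behavior, which the real RestartAlways pod also exercises):

```python
def run_pod(init_containers, app_containers):
    """Mimic kubelet ordering: run init containers sequentially, stopping
    at the first nonzero exit; start app containers only if all succeed.
    init_containers is a list of (name, exit_code) pairs."""
    log = []
    for name, exit_code in init_containers:
        log.append(("init", name))
        if exit_code != 0:
            return log, "init-failed"
    for name in app_containers:
        log.append(("app", name))
    return log, "running"

log, status = run_pod([("init1", 0), ("init2", 0)], ["run1"])
```

With both init containers succeeding, the app container starts last and the pod reaches a running state; a nonzero init exit prevents everything after it from starting.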
• [SLOW TEST:7.781 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should invoke init containers on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance]","total":278,"completed":237,"skipped":3753,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Secrets 
  should fail to create secret due to empty secret key [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
May 10 22:16:22.043: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail to create secret due to empty secret key [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating projection with secret that has name secret-emptykey-test-0db31bdc-c600-473f-bcc4-1121110f5a22
[AfterEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
May 10 22:16:22.110: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-2303" for this suite.
•{"msg":"PASSED [sig-api-machinery] Secrets should fail to create secret due to empty secret key [Conformance]","total":278,"completed":238,"skipped":3786,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] DNS 
  should provide DNS for services [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
May 10 22:16:22.162: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for services [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a test headless service
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-4021.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-4021.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-4021.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-4021.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-4021.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-4021.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-4021.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-4021.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-4021.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-4021.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-4021.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-4021.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-4021.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 6.141.111.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.111.141.6_udp@PTR;check="$$(dig +tcp +noall +answer +search 6.141.111.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.111.141.6_tcp@PTR;sleep 1; done
STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-4021.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-4021.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-4021.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-4021.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-4021.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-4021.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-4021.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-4021.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-4021.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-4021.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-4021.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-4021.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-4021.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 6.141.111.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.111.141.6_udp@PTR;check="$$(dig +tcp +noall +answer +search 6.141.111.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.111.141.6_tcp@PTR;sleep 1; done
STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
May 10 22:16:30.385: INFO: Unable to read wheezy_udp@dns-test-service.dns-4021.svc.cluster.local from pod dns-4021/dns-test-36274089-2bae-4ab7-83cf-2664ed611f9d: the server could not find the requested resource (get pods dns-test-36274089-2bae-4ab7-83cf-2664ed611f9d)
May 10 22:16:30.388: INFO: Unable to read wheezy_tcp@dns-test-service.dns-4021.svc.cluster.local from pod dns-4021/dns-test-36274089-2bae-4ab7-83cf-2664ed611f9d: the server could not find the requested resource (get pods dns-test-36274089-2bae-4ab7-83cf-2664ed611f9d)
May 10 22:16:30.391: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-4021.svc.cluster.local from pod dns-4021/dns-test-36274089-2bae-4ab7-83cf-2664ed611f9d: the server could not find the requested resource (get pods dns-test-36274089-2bae-4ab7-83cf-2664ed611f9d)
May 10 22:16:30.394: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-4021.svc.cluster.local from pod dns-4021/dns-test-36274089-2bae-4ab7-83cf-2664ed611f9d: the server could not find the requested resource (get pods dns-test-36274089-2bae-4ab7-83cf-2664ed611f9d)
May 10 22:16:30.414: INFO: Unable to read jessie_udp@dns-test-service.dns-4021.svc.cluster.local from pod dns-4021/dns-test-36274089-2bae-4ab7-83cf-2664ed611f9d: the server could not find the requested resource (get pods dns-test-36274089-2bae-4ab7-83cf-2664ed611f9d)
May 10 22:16:30.416: INFO: Unable to read jessie_tcp@dns-test-service.dns-4021.svc.cluster.local from pod dns-4021/dns-test-36274089-2bae-4ab7-83cf-2664ed611f9d: the server could not find the requested resource (get pods dns-test-36274089-2bae-4ab7-83cf-2664ed611f9d)
May 10 22:16:30.419: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-4021.svc.cluster.local from pod dns-4021/dns-test-36274089-2bae-4ab7-83cf-2664ed611f9d: the server could not find the requested resource (get pods dns-test-36274089-2bae-4ab7-83cf-2664ed611f9d)
May 10 22:16:30.422: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-4021.svc.cluster.local from pod dns-4021/dns-test-36274089-2bae-4ab7-83cf-2664ed611f9d: the server could not find the requested resource (get pods dns-test-36274089-2bae-4ab7-83cf-2664ed611f9d)
May 10 22:16:30.438: INFO: Lookups using dns-4021/dns-test-36274089-2bae-4ab7-83cf-2664ed611f9d failed for: [wheezy_udp@dns-test-service.dns-4021.svc.cluster.local wheezy_tcp@dns-test-service.dns-4021.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-4021.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-4021.svc.cluster.local jessie_udp@dns-test-service.dns-4021.svc.cluster.local jessie_tcp@dns-test-service.dns-4021.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-4021.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-4021.svc.cluster.local]
May 10 22:16:35.444: INFO: Unable to read wheezy_udp@dns-test-service.dns-4021.svc.cluster.local from pod dns-4021/dns-test-36274089-2bae-4ab7-83cf-2664ed611f9d: the server could not find the requested resource (get pods dns-test-36274089-2bae-4ab7-83cf-2664ed611f9d)
May 10 22:16:35.448: INFO: Unable to read wheezy_tcp@dns-test-service.dns-4021.svc.cluster.local from pod dns-4021/dns-test-36274089-2bae-4ab7-83cf-2664ed611f9d: the server could not find the requested resource (get pods dns-test-36274089-2bae-4ab7-83cf-2664ed611f9d)
May 10 22:16:35.451: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-4021.svc.cluster.local from pod dns-4021/dns-test-36274089-2bae-4ab7-83cf-2664ed611f9d: the server could not find the requested resource (get pods dns-test-36274089-2bae-4ab7-83cf-2664ed611f9d)
May 10 22:16:35.454: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-4021.svc.cluster.local from pod dns-4021/dns-test-36274089-2bae-4ab7-83cf-2664ed611f9d: the server could not find the requested resource (get pods dns-test-36274089-2bae-4ab7-83cf-2664ed611f9d)
May 10 22:16:35.473: INFO: Unable to read jessie_udp@dns-test-service.dns-4021.svc.cluster.local from pod dns-4021/dns-test-36274089-2bae-4ab7-83cf-2664ed611f9d: the server could not find the requested resource (get pods dns-test-36274089-2bae-4ab7-83cf-2664ed611f9d)
May 10 22:16:35.476: INFO: Unable to read jessie_tcp@dns-test-service.dns-4021.svc.cluster.local from pod dns-4021/dns-test-36274089-2bae-4ab7-83cf-2664ed611f9d: the server could not find the requested resource (get pods dns-test-36274089-2bae-4ab7-83cf-2664ed611f9d)
May 10 22:16:35.480: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-4021.svc.cluster.local from pod dns-4021/dns-test-36274089-2bae-4ab7-83cf-2664ed611f9d: the server could not find the requested resource (get pods dns-test-36274089-2bae-4ab7-83cf-2664ed611f9d)
May 10 22:16:35.482: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-4021.svc.cluster.local from pod dns-4021/dns-test-36274089-2bae-4ab7-83cf-2664ed611f9d: the server could not find the requested resource (get pods dns-test-36274089-2bae-4ab7-83cf-2664ed611f9d)
May 10 22:16:35.495: INFO: Lookups using dns-4021/dns-test-36274089-2bae-4ab7-83cf-2664ed611f9d failed for: [wheezy_udp@dns-test-service.dns-4021.svc.cluster.local wheezy_tcp@dns-test-service.dns-4021.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-4021.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-4021.svc.cluster.local jessie_udp@dns-test-service.dns-4021.svc.cluster.local jessie_tcp@dns-test-service.dns-4021.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-4021.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-4021.svc.cluster.local]
May 10 22:16:40.443: INFO: Unable to read wheezy_udp@dns-test-service.dns-4021.svc.cluster.local from pod dns-4021/dns-test-36274089-2bae-4ab7-83cf-2664ed611f9d: the server could not find the requested resource (get pods dns-test-36274089-2bae-4ab7-83cf-2664ed611f9d)
May 10 22:16:40.447: INFO: Unable to read wheezy_tcp@dns-test-service.dns-4021.svc.cluster.local from pod dns-4021/dns-test-36274089-2bae-4ab7-83cf-2664ed611f9d: the server could not find the requested resource (get pods dns-test-36274089-2bae-4ab7-83cf-2664ed611f9d)
May 10 22:16:40.463: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-4021.svc.cluster.local from pod dns-4021/dns-test-36274089-2bae-4ab7-83cf-2664ed611f9d: the server could not find the requested resource (get pods dns-test-36274089-2bae-4ab7-83cf-2664ed611f9d)
May 10 22:16:40.493: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-4021.svc.cluster.local from pod dns-4021/dns-test-36274089-2bae-4ab7-83cf-2664ed611f9d: the server could not find the requested resource (get pods dns-test-36274089-2bae-4ab7-83cf-2664ed611f9d)
May 10 22:16:40.599: INFO: Unable to read jessie_udp@dns-test-service.dns-4021.svc.cluster.local from pod dns-4021/dns-test-36274089-2bae-4ab7-83cf-2664ed611f9d: the server could not find the requested resource (get pods dns-test-36274089-2bae-4ab7-83cf-2664ed611f9d)
May 10 22:16:40.603: INFO: Unable to read jessie_tcp@dns-test-service.dns-4021.svc.cluster.local from pod dns-4021/dns-test-36274089-2bae-4ab7-83cf-2664ed611f9d: the server could not find the requested resource (get pods dns-test-36274089-2bae-4ab7-83cf-2664ed611f9d)
May 10 22:16:40.606: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-4021.svc.cluster.local from pod dns-4021/dns-test-36274089-2bae-4ab7-83cf-2664ed611f9d: the server could not find the requested resource (get pods dns-test-36274089-2bae-4ab7-83cf-2664ed611f9d)
May 10 22:16:40.610: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-4021.svc.cluster.local from pod dns-4021/dns-test-36274089-2bae-4ab7-83cf-2664ed611f9d: the server could not find the requested resource (get pods dns-test-36274089-2bae-4ab7-83cf-2664ed611f9d)
May 10 22:16:40.629: INFO: Lookups using dns-4021/dns-test-36274089-2bae-4ab7-83cf-2664ed611f9d failed for: [wheezy_udp@dns-test-service.dns-4021.svc.cluster.local wheezy_tcp@dns-test-service.dns-4021.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-4021.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-4021.svc.cluster.local jessie_udp@dns-test-service.dns-4021.svc.cluster.local jessie_tcp@dns-test-service.dns-4021.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-4021.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-4021.svc.cluster.local]
May 10 22:16:45.442: INFO: Unable to read wheezy_udp@dns-test-service.dns-4021.svc.cluster.local from pod dns-4021/dns-test-36274089-2bae-4ab7-83cf-2664ed611f9d: the server could not find the requested resource (get pods dns-test-36274089-2bae-4ab7-83cf-2664ed611f9d)
May 10 22:16:45.450: INFO: Unable to read wheezy_tcp@dns-test-service.dns-4021.svc.cluster.local from pod dns-4021/dns-test-36274089-2bae-4ab7-83cf-2664ed611f9d: the server could not find the requested resource (get pods dns-test-36274089-2bae-4ab7-83cf-2664ed611f9d)
May 10 22:16:45.453: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-4021.svc.cluster.local from pod dns-4021/dns-test-36274089-2bae-4ab7-83cf-2664ed611f9d: the server could not find the requested resource (get pods dns-test-36274089-2bae-4ab7-83cf-2664ed611f9d)
May 10 22:16:45.456: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-4021.svc.cluster.local from pod dns-4021/dns-test-36274089-2bae-4ab7-83cf-2664ed611f9d: the server could not find the requested resource (get pods dns-test-36274089-2bae-4ab7-83cf-2664ed611f9d)
May 10 22:16:45.500: INFO: Unable to read jessie_udp@dns-test-service.dns-4021.svc.cluster.local from pod dns-4021/dns-test-36274089-2bae-4ab7-83cf-2664ed611f9d: the server could not find the requested resource (get pods dns-test-36274089-2bae-4ab7-83cf-2664ed611f9d)
May 10 22:16:45.503: INFO: Unable to read jessie_tcp@dns-test-service.dns-4021.svc.cluster.local from pod dns-4021/dns-test-36274089-2bae-4ab7-83cf-2664ed611f9d: the server could not find the requested resource (get pods dns-test-36274089-2bae-4ab7-83cf-2664ed611f9d)
May 10 22:16:45.506: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-4021.svc.cluster.local from pod dns-4021/dns-test-36274089-2bae-4ab7-83cf-2664ed611f9d: the server could not find the requested resource (get pods dns-test-36274089-2bae-4ab7-83cf-2664ed611f9d)
May 10 22:16:45.508: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-4021.svc.cluster.local from pod dns-4021/dns-test-36274089-2bae-4ab7-83cf-2664ed611f9d: the server could not find the requested resource (get pods dns-test-36274089-2bae-4ab7-83cf-2664ed611f9d)
May 10 22:16:45.544: INFO: Lookups using dns-4021/dns-test-36274089-2bae-4ab7-83cf-2664ed611f9d failed for: [wheezy_udp@dns-test-service.dns-4021.svc.cluster.local wheezy_tcp@dns-test-service.dns-4021.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-4021.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-4021.svc.cluster.local jessie_udp@dns-test-service.dns-4021.svc.cluster.local jessie_tcp@dns-test-service.dns-4021.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-4021.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-4021.svc.cluster.local]
May 10 22:16:50.443: INFO: Unable to read wheezy_udp@dns-test-service.dns-4021.svc.cluster.local from pod dns-4021/dns-test-36274089-2bae-4ab7-83cf-2664ed611f9d: the server could not find the requested resource (get pods dns-test-36274089-2bae-4ab7-83cf-2664ed611f9d)
May 10 22:16:50.447: INFO: Unable to read wheezy_tcp@dns-test-service.dns-4021.svc.cluster.local from pod dns-4021/dns-test-36274089-2bae-4ab7-83cf-2664ed611f9d: the server could not find the requested resource (get pods dns-test-36274089-2bae-4ab7-83cf-2664ed611f9d)
May 10 22:16:50.457: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-4021.svc.cluster.local from pod dns-4021/dns-test-36274089-2bae-4ab7-83cf-2664ed611f9d: the server could not find the requested resource (get pods dns-test-36274089-2bae-4ab7-83cf-2664ed611f9d)
May 10 22:16:50.463: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-4021.svc.cluster.local from pod dns-4021/dns-test-36274089-2bae-4ab7-83cf-2664ed611f9d: the server could not find the requested resource (get pods dns-test-36274089-2bae-4ab7-83cf-2664ed611f9d)
May 10 22:16:50.487: INFO: Unable to read jessie_udp@dns-test-service.dns-4021.svc.cluster.local from pod dns-4021/dns-test-36274089-2bae-4ab7-83cf-2664ed611f9d: the server could not find the requested resource (get pods dns-test-36274089-2bae-4ab7-83cf-2664ed611f9d)
May 10 22:16:50.490: INFO: Unable to read jessie_tcp@dns-test-service.dns-4021.svc.cluster.local from pod dns-4021/dns-test-36274089-2bae-4ab7-83cf-2664ed611f9d: the server could not find the requested resource (get pods dns-test-36274089-2bae-4ab7-83cf-2664ed611f9d)
May 10 22:16:50.492: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-4021.svc.cluster.local from pod dns-4021/dns-test-36274089-2bae-4ab7-83cf-2664ed611f9d: the server could not find the requested resource (get pods dns-test-36274089-2bae-4ab7-83cf-2664ed611f9d)
May 10 22:16:50.496: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-4021.svc.cluster.local from pod dns-4021/dns-test-36274089-2bae-4ab7-83cf-2664ed611f9d: the server could not find the requested resource (get pods dns-test-36274089-2bae-4ab7-83cf-2664ed611f9d)
May 10 22:16:50.532: INFO: Lookups using dns-4021/dns-test-36274089-2bae-4ab7-83cf-2664ed611f9d failed for: [wheezy_udp@dns-test-service.dns-4021.svc.cluster.local wheezy_tcp@dns-test-service.dns-4021.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-4021.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-4021.svc.cluster.local jessie_udp@dns-test-service.dns-4021.svc.cluster.local jessie_tcp@dns-test-service.dns-4021.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-4021.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-4021.svc.cluster.local]
May 10 22:16:55.444: INFO: Unable to read wheezy_udp@dns-test-service.dns-4021.svc.cluster.local from pod dns-4021/dns-test-36274089-2bae-4ab7-83cf-2664ed611f9d: the server could not find the requested resource (get pods dns-test-36274089-2bae-4ab7-83cf-2664ed611f9d)
May 10 22:16:55.447: INFO: Unable to read wheezy_tcp@dns-test-service.dns-4021.svc.cluster.local from pod dns-4021/dns-test-36274089-2bae-4ab7-83cf-2664ed611f9d: the server could not find the requested resource (get pods dns-test-36274089-2bae-4ab7-83cf-2664ed611f9d)
May 10 22:16:55.450: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-4021.svc.cluster.local from pod dns-4021/dns-test-36274089-2bae-4ab7-83cf-2664ed611f9d: the server could not find the requested resource (get pods dns-test-36274089-2bae-4ab7-83cf-2664ed611f9d)
May 10 22:16:55.452: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-4021.svc.cluster.local from pod dns-4021/dns-test-36274089-2bae-4ab7-83cf-2664ed611f9d: the server could not find the requested resource (get pods dns-test-36274089-2bae-4ab7-83cf-2664ed611f9d)
May 10 22:16:55.470: INFO: Unable to read jessie_udp@dns-test-service.dns-4021.svc.cluster.local from pod dns-4021/dns-test-36274089-2bae-4ab7-83cf-2664ed611f9d: the server could not find the requested resource (get pods dns-test-36274089-2bae-4ab7-83cf-2664ed611f9d)
May 10 22:16:55.473: INFO: Unable to read jessie_tcp@dns-test-service.dns-4021.svc.cluster.local from pod dns-4021/dns-test-36274089-2bae-4ab7-83cf-2664ed611f9d: the server could not find the requested resource (get pods dns-test-36274089-2bae-4ab7-83cf-2664ed611f9d)
May 10 22:16:55.475: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-4021.svc.cluster.local from pod dns-4021/dns-test-36274089-2bae-4ab7-83cf-2664ed611f9d: the server could not find the requested resource (get pods dns-test-36274089-2bae-4ab7-83cf-2664ed611f9d)
May 10 22:16:55.477: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-4021.svc.cluster.local from pod dns-4021/dns-test-36274089-2bae-4ab7-83cf-2664ed611f9d: the server could not find the requested resource (get pods dns-test-36274089-2bae-4ab7-83cf-2664ed611f9d)
May 10 22:16:55.494: INFO: Lookups using dns-4021/dns-test-36274089-2bae-4ab7-83cf-2664ed611f9d failed for: [wheezy_udp@dns-test-service.dns-4021.svc.cluster.local wheezy_tcp@dns-test-service.dns-4021.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-4021.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-4021.svc.cluster.local jessie_udp@dns-test-service.dns-4021.svc.cluster.local jessie_tcp@dns-test-service.dns-4021.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-4021.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-4021.svc.cluster.local]
May 10 22:17:00.518: INFO: DNS probes using dns-4021/dns-test-36274089-2bae-4ab7-83cf-2664ed611f9d succeeded
STEP: deleting the pod
STEP: deleting the test service
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
May 10 22:17:00.895: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-4021" for this suite.
• [SLOW TEST:39.225 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide DNS for services [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] DNS should provide DNS for services [Conformance]","total":278,"completed":239,"skipped":3814,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Namespaces [Serial] 
  should ensure that all pods are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
May 10 22:17:01.388: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename namespaces
STEP: Waiting for a default service account to be provisioned in namespace
[It] should ensure that all pods are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a test namespace
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Creating a pod in the namespace
STEP: Waiting for the pod to have running status
STEP: Deleting the namespace
STEP: Waiting for the namespace to be removed.
STEP: Recreating the namespace
STEP: Verifying there are no pods in the namespace
[AfterEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
May 10 22:17:18.791: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "namespaces-8542" for this suite.
STEP: Destroying namespace "nsdeletetest-4178" for this suite.
May 10 22:17:18.821: INFO: Namespace nsdeletetest-4178 was already deleted
STEP: Destroying namespace "nsdeletetest-9723" for this suite.
• [SLOW TEST:17.438 seconds]
[sig-api-machinery] Namespaces [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should ensure that all pods are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance]","total":278,"completed":240,"skipped":3839,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should be able to deny custom resource creation, update and deletion [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
May 10 22:17:18.826: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
May 10 22:17:19.360: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
May 10 22:17:21.438: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724745839, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724745839, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724745839, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724745839, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
May 10 22:17:23.442: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724745839, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724745839, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724745839, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724745839, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
May 10 22:17:26.482: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should be able to deny custom resource creation, update and deletion [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
May 10 22:17:26.486: INFO: >>> kubeConfig: /root/.kube/config
STEP: Registering the custom resource webhook via the AdmissionRegistration API
STEP: Creating a custom resource that should be denied by the webhook
STEP: Creating a custom resource whose deletion would be denied by the webhook
STEP: Updating the custom resource with disallowed data should be denied
STEP: Deleting the custom resource should be denied
STEP: Remove the offending key and value from the custom resource data
STEP: Deleting the updated custom resource should be successful
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
May 10 22:17:27.662: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-2016" for this suite.
STEP: Destroying namespace "webhook-2016-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
• [SLOW TEST:8.897 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to deny custom resource creation, update and deletion [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","total":278,"completed":241,"skipped":3862,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl label 
  should update the label on a resource [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
May 10 22:17:27.723: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272
[BeforeEach] Kubectl label
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1275
STEP: creating the pod
May 10 22:17:27.792: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-5028'
May 10 22:17:28.137: INFO: stderr: ""
May 10 22:17:28.137: INFO: stdout: "pod/pause created\n"
May 10 22:17:28.137: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [pause]
May 10 22:17:28.138: INFO: Waiting up to 5m0s for pod "pause" in namespace "kubectl-5028" to be "running and ready"
May 10 22:17:28.210: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 72.030078ms
May 10 22:17:30.214: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 2.07633449s
May 10 22:17:32.217: INFO: Pod "pause": Phase="Running", Reason="", readiness=true. Elapsed: 4.079800042s
May 10 22:17:32.217: INFO: Pod "pause" satisfied condition "running and ready"
May 10 22:17:32.217: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [pause]
[It] should update the label on a resource [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: adding the label testing-label with value testing-label-value to a pod
May 10 22:17:32.218: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label=testing-label-value --namespace=kubectl-5028'
May 10 22:17:32.328: INFO: stderr: ""
May 10 22:17:32.328: INFO: stdout: "pod/pause labeled\n"
STEP: verifying the pod has the label testing-label with the value testing-label-value
May 10 22:17:32.328: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-5028'
May 10 22:17:32.427: INFO: stderr: ""
May 10 22:17:32.427: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 4s testing-label-value\n"
STEP: removing the label testing-label of a pod
May 10 22:17:32.427: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label- --namespace=kubectl-5028'
May 10 22:17:32.533: INFO: stderr: ""
May 10 22:17:32.533: INFO: stdout: "pod/pause labeled\n"
STEP: verifying the pod doesn't have the label testing-label
May 10 22:17:32.533: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-5028'
May 10 22:17:32.619: INFO: stderr: ""
May 10 22:17:32.620: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 4s \n"
[AfterEach] Kubectl label
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1282
STEP: using delete to clean up resources
May 10 22:17:32.620: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-5028'
May 10 22:17:32.774: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
May 10 22:17:32.774: INFO: stdout: "pod \"pause\" force deleted\n"
May 10 22:17:32.774: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=pause --no-headers --namespace=kubectl-5028'
May 10 22:17:32.899: INFO: stderr: "No resources found in kubectl-5028 namespace.\n"
May 10 22:17:32.900: INFO: stdout: ""
May 10 22:17:32.900: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=pause --namespace=kubectl-5028 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
May 10 22:17:33.117: INFO: stderr: ""
May 10 22:17:33.117: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
May 10 22:17:33.117: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-5028" for this suite.
• [SLOW TEST:5.516 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1272 should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl label should update the label on a resource [Conformance]","total":278,"completed":242,"skipped":3894,"failed":0} SSSS ------------------------------ [sig-network] Service endpoints latency should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 10 22:17:33.240: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svc-latency STEP: Waiting for a default service account to be provisioned in namespace [It] should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 10 22:17:33.464: INFO: >>> kubeConfig: /root/.kube/config STEP: creating replication controller svc-latency-rc in namespace svc-latency-9134 I0510 22:17:33.476913 6 runners.go:189] Created replication controller with name: svc-latency-rc, namespace: svc-latency-9134, replica count: 1 I0510 22:17:34.527319 6 runners.go:189] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0510 22:17:35.527536 6 runners.go:189] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0510 22:17:36.527751 6 
runners.go:189] svc-latency-rc Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady May 10 22:17:36.701: INFO: Created: latency-svc-xgcrf May 10 22:17:36.708: INFO: Got endpoints: latency-svc-xgcrf [80.669167ms] May 10 22:17:36.743: INFO: Created: latency-svc-btrwc May 10 22:17:36.757: INFO: Got endpoints: latency-svc-btrwc [47.723797ms] May 10 22:17:36.779: INFO: Created: latency-svc-f94sl May 10 22:17:36.794: INFO: Got endpoints: latency-svc-f94sl [85.880605ms] May 10 22:17:36.839: INFO: Created: latency-svc-ttl2z May 10 22:17:36.869: INFO: Got endpoints: latency-svc-ttl2z [160.54612ms] May 10 22:17:36.869: INFO: Created: latency-svc-r9wpk May 10 22:17:36.882: INFO: Got endpoints: latency-svc-r9wpk [174.065721ms] May 10 22:17:37.000: INFO: Created: latency-svc-bk5b7 May 10 22:17:37.003: INFO: Got endpoints: latency-svc-bk5b7 [294.135261ms] May 10 22:17:37.036: INFO: Created: latency-svc-sdxxk May 10 22:17:37.053: INFO: Got endpoints: latency-svc-sdxxk [344.030994ms] May 10 22:17:37.078: INFO: Created: latency-svc-dg725 May 10 22:17:37.095: INFO: Got endpoints: latency-svc-dg725 [386.411635ms] May 10 22:17:37.156: INFO: Created: latency-svc-hc6dk May 10 22:17:37.163: INFO: Got endpoints: latency-svc-hc6dk [454.131431ms] May 10 22:17:37.211: INFO: Created: latency-svc-nbjq6 May 10 22:17:37.245: INFO: Got endpoints: latency-svc-nbjq6 [536.376824ms] May 10 22:17:37.300: INFO: Created: latency-svc-rph4r May 10 22:17:37.303: INFO: Got endpoints: latency-svc-rph4r [594.483992ms] May 10 22:17:37.791: INFO: Created: latency-svc-6nldx May 10 22:17:37.833: INFO: Got endpoints: latency-svc-6nldx [1.124802942s] May 10 22:17:37.864: INFO: Created: latency-svc-vqmsr May 10 22:17:37.881: INFO: Got endpoints: latency-svc-vqmsr [1.172621043s] May 10 22:17:37.941: INFO: Created: latency-svc-zmswf May 10 22:17:37.956: INFO: Got endpoints: latency-svc-zmswf [1.246916056s] May 10 22:17:37.983: INFO: Created: 
latency-svc-q7mrf May 10 22:17:37.994: INFO: Got endpoints: latency-svc-q7mrf [1.284968564s] May 10 22:17:38.014: INFO: Created: latency-svc-r5kvw May 10 22:17:38.024: INFO: Got endpoints: latency-svc-r5kvw [1.31510173s] May 10 22:17:38.072: INFO: Created: latency-svc-spgww May 10 22:17:38.076: INFO: Got endpoints: latency-svc-spgww [1.318993603s] May 10 22:17:38.099: INFO: Created: latency-svc-k4kcw May 10 22:17:38.114: INFO: Got endpoints: latency-svc-k4kcw [1.320177985s] May 10 22:17:38.134: INFO: Created: latency-svc-2ns5q May 10 22:17:38.160: INFO: Got endpoints: latency-svc-2ns5q [1.290624476s] May 10 22:17:38.228: INFO: Created: latency-svc-h4bmh May 10 22:17:38.260: INFO: Got endpoints: latency-svc-h4bmh [1.377156978s] May 10 22:17:38.313: INFO: Created: latency-svc-5bm5z May 10 22:17:38.372: INFO: Got endpoints: latency-svc-5bm5z [1.368922968s] May 10 22:17:38.381: INFO: Created: latency-svc-8kxbf May 10 22:17:38.397: INFO: Got endpoints: latency-svc-8kxbf [1.344292223s] May 10 22:17:38.446: INFO: Created: latency-svc-zgvbx May 10 22:17:38.505: INFO: Got endpoints: latency-svc-zgvbx [1.409681341s] May 10 22:17:38.529: INFO: Created: latency-svc-gr42f May 10 22:17:38.554: INFO: Got endpoints: latency-svc-gr42f [1.391234298s] May 10 22:17:38.584: INFO: Created: latency-svc-7hl5m May 10 22:17:38.602: INFO: Got endpoints: latency-svc-7hl5m [1.35721747s] May 10 22:17:38.649: INFO: Created: latency-svc-6ffjx May 10 22:17:38.651: INFO: Got endpoints: latency-svc-6ffjx [1.347378416s] May 10 22:17:38.679: INFO: Created: latency-svc-59jdf May 10 22:17:38.709: INFO: Got endpoints: latency-svc-59jdf [875.702715ms] May 10 22:17:38.741: INFO: Created: latency-svc-p6qtk May 10 22:17:38.784: INFO: Got endpoints: latency-svc-p6qtk [902.600644ms] May 10 22:17:38.799: INFO: Created: latency-svc-8pfpp May 10 22:17:38.815: INFO: Got endpoints: latency-svc-8pfpp [858.802305ms] May 10 22:17:38.835: INFO: Created: latency-svc-bpj6z May 10 22:17:38.851: INFO: Got endpoints: 
latency-svc-bpj6z [856.922213ms] May 10 22:17:38.921: INFO: Created: latency-svc-rjfsk May 10 22:17:38.941: INFO: Got endpoints: latency-svc-rjfsk [917.869241ms] May 10 22:17:39.067: INFO: Created: latency-svc-chxdl May 10 22:17:39.079: INFO: Got endpoints: latency-svc-chxdl [1.003312281s] May 10 22:17:39.099: INFO: Created: latency-svc-5cjq8 May 10 22:17:39.115: INFO: Got endpoints: latency-svc-5cjq8 [1.000960352s] May 10 22:17:39.154: INFO: Created: latency-svc-65vzp May 10 22:17:39.165: INFO: Got endpoints: latency-svc-65vzp [1.005682981s] May 10 22:17:39.213: INFO: Created: latency-svc-5ptvx May 10 22:17:39.230: INFO: Got endpoints: latency-svc-5ptvx [970.517063ms] May 10 22:17:39.267: INFO: Created: latency-svc-4hmxx May 10 22:17:39.285: INFO: Got endpoints: latency-svc-4hmxx [913.173087ms] May 10 22:17:39.366: INFO: Created: latency-svc-hhphd May 10 22:17:39.369: INFO: Got endpoints: latency-svc-hhphd [971.476503ms] May 10 22:17:39.435: INFO: Created: latency-svc-tlntj May 10 22:17:39.453: INFO: Got endpoints: latency-svc-tlntj [948.278687ms] May 10 22:17:39.527: INFO: Created: latency-svc-l9j78 May 10 22:17:39.531: INFO: Got endpoints: latency-svc-l9j78 [976.631847ms] May 10 22:17:39.561: INFO: Created: latency-svc-hrd2v May 10 22:17:39.573: INFO: Got endpoints: latency-svc-hrd2v [970.847171ms] May 10 22:17:39.610: INFO: Created: latency-svc-xdqgd May 10 22:17:39.683: INFO: Got endpoints: latency-svc-xdqgd [1.032519355s] May 10 22:17:39.687: INFO: Created: latency-svc-5tjt9 May 10 22:17:39.712: INFO: Got endpoints: latency-svc-5tjt9 [1.002539003s] May 10 22:17:39.771: INFO: Created: latency-svc-hmxwn May 10 22:17:39.869: INFO: Got endpoints: latency-svc-hmxwn [1.084948029s] May 10 22:17:39.904: INFO: Created: latency-svc-9p9zp May 10 22:17:39.916: INFO: Got endpoints: latency-svc-9p9zp [1.101539352s] May 10 22:17:40.049: INFO: Created: latency-svc-h7r5w May 10 22:17:40.059: INFO: Got endpoints: latency-svc-h7r5w [1.208330504s] May 10 22:17:40.084: INFO: 
Created: latency-svc-bxx97 May 10 22:17:40.107: INFO: Got endpoints: latency-svc-bxx97 [1.165679123s] May 10 22:17:40.144: INFO: Created: latency-svc-94br8 May 10 22:17:40.204: INFO: Got endpoints: latency-svc-94br8 [1.124935893s] May 10 22:17:40.227: INFO: Created: latency-svc-xvl5p May 10 22:17:40.247: INFO: Got endpoints: latency-svc-xvl5p [1.132001297s] May 10 22:17:40.275: INFO: Created: latency-svc-z6dv8 May 10 22:17:40.289: INFO: Got endpoints: latency-svc-z6dv8 [1.12378649s] May 10 22:17:40.372: INFO: Created: latency-svc-9zplf May 10 22:17:40.385: INFO: Got endpoints: latency-svc-9zplf [1.155006566s] May 10 22:17:40.425: INFO: Created: latency-svc-slnlt May 10 22:17:40.445: INFO: Got endpoints: latency-svc-slnlt [1.160287511s] May 10 22:17:40.467: INFO: Created: latency-svc-qlkfw May 10 22:17:40.515: INFO: Got endpoints: latency-svc-qlkfw [1.146043509s] May 10 22:17:40.545: INFO: Created: latency-svc-llrbq May 10 22:17:40.560: INFO: Got endpoints: latency-svc-llrbq [1.106961336s] May 10 22:17:40.594: INFO: Created: latency-svc-l44sw May 10 22:17:40.678: INFO: Got endpoints: latency-svc-l44sw [1.146862018s] May 10 22:17:40.706: INFO: Created: latency-svc-fxkb9 May 10 22:17:40.722: INFO: Got endpoints: latency-svc-fxkb9 [1.149352071s] May 10 22:17:40.815: INFO: Created: latency-svc-9kh4q May 10 22:17:40.818: INFO: Got endpoints: latency-svc-9kh4q [1.134483301s] May 10 22:17:40.845: INFO: Created: latency-svc-v4dx7 May 10 22:17:40.855: INFO: Got endpoints: latency-svc-v4dx7 [1.14304282s] May 10 22:17:40.881: INFO: Created: latency-svc-wcjdv May 10 22:17:40.891: INFO: Got endpoints: latency-svc-wcjdv [1.021869433s] May 10 22:17:40.976: INFO: Created: latency-svc-csc2d May 10 22:17:40.980: INFO: Got endpoints: latency-svc-csc2d [1.063634125s] May 10 22:17:41.043: INFO: Created: latency-svc-hs9g2 May 10 22:17:41.071: INFO: Got endpoints: latency-svc-hs9g2 [1.011954072s] May 10 22:17:41.109: INFO: Created: latency-svc-nf8jv May 10 22:17:41.132: INFO: Got 
endpoints: latency-svc-nf8jv [1.024954147s] May 10 22:17:41.169: INFO: Created: latency-svc-k6s7z May 10 22:17:41.186: INFO: Got endpoints: latency-svc-k6s7z [981.720253ms] May 10 22:17:41.283: INFO: Created: latency-svc-lgg56 May 10 22:17:41.300: INFO: Got endpoints: latency-svc-lgg56 [1.052441461s] May 10 22:17:41.481: INFO: Created: latency-svc-svc5n May 10 22:17:41.516: INFO: Got endpoints: latency-svc-svc5n [1.226570656s] May 10 22:17:41.593: INFO: Created: latency-svc-65v54 May 10 22:17:41.601: INFO: Got endpoints: latency-svc-65v54 [1.215616059s] May 10 22:17:41.637: INFO: Created: latency-svc-kj44j May 10 22:17:41.655: INFO: Got endpoints: latency-svc-kj44j [1.209330104s] May 10 22:17:41.684: INFO: Created: latency-svc-mg5sl May 10 22:17:41.737: INFO: Got endpoints: latency-svc-mg5sl [1.222059843s] May 10 22:17:41.768: INFO: Created: latency-svc-2mg8l May 10 22:17:41.781: INFO: Got endpoints: latency-svc-2mg8l [1.220863365s] May 10 22:17:41.805: INFO: Created: latency-svc-csncm May 10 22:17:41.817: INFO: Got endpoints: latency-svc-csncm [1.13958671s] May 10 22:17:41.892: INFO: Created: latency-svc-nj7vt May 10 22:17:41.901: INFO: Got endpoints: latency-svc-nj7vt [1.178476631s] May 10 22:17:41.924: INFO: Created: latency-svc-828sf May 10 22:17:41.944: INFO: Got endpoints: latency-svc-828sf [1.125878734s] May 10 22:17:41.966: INFO: Created: latency-svc-4wtch May 10 22:17:41.979: INFO: Got endpoints: latency-svc-4wtch [1.124544409s] May 10 22:17:42.037: INFO: Created: latency-svc-jzcrg May 10 22:17:42.046: INFO: Got endpoints: latency-svc-jzcrg [1.155100071s] May 10 22:17:42.070: INFO: Created: latency-svc-mvc5x May 10 22:17:42.088: INFO: Got endpoints: latency-svc-mvc5x [1.108269704s] May 10 22:17:42.111: INFO: Created: latency-svc-9mn98 May 10 22:17:42.124: INFO: Got endpoints: latency-svc-9mn98 [1.052875109s] May 10 22:17:42.228: INFO: Created: latency-svc-fcwfc May 10 22:17:42.231: INFO: Got endpoints: latency-svc-fcwfc [1.099020769s] May 10 22:17:42.288: 
INFO: Created: latency-svc-l9v5p May 10 22:17:42.303: INFO: Got endpoints: latency-svc-l9v5p [1.116618328s] May 10 22:17:42.396: INFO: Created: latency-svc-sl9cw May 10 22:17:42.399: INFO: Got endpoints: latency-svc-sl9cw [1.099179453s] May 10 22:17:42.495: INFO: Created: latency-svc-m5jbh May 10 22:17:42.533: INFO: Got endpoints: latency-svc-m5jbh [1.017526783s] May 10 22:17:42.548: INFO: Created: latency-svc-4zwnq May 10 22:17:42.563: INFO: Got endpoints: latency-svc-4zwnq [962.026562ms] May 10 22:17:42.591: INFO: Created: latency-svc-k4gz2 May 10 22:17:42.606: INFO: Got endpoints: latency-svc-k4gz2 [951.458742ms] May 10 22:17:42.694: INFO: Created: latency-svc-wt6n7 May 10 22:17:42.697: INFO: Got endpoints: latency-svc-wt6n7 [959.997419ms] May 10 22:17:42.728: INFO: Created: latency-svc-swmx8 May 10 22:17:42.744: INFO: Got endpoints: latency-svc-swmx8 [963.42101ms] May 10 22:17:42.764: INFO: Created: latency-svc-qpnkz May 10 22:17:42.781: INFO: Got endpoints: latency-svc-qpnkz [963.659955ms] May 10 22:17:42.833: INFO: Created: latency-svc-r46b9 May 10 22:17:42.836: INFO: Got endpoints: latency-svc-r46b9 [935.080091ms] May 10 22:17:42.878: INFO: Created: latency-svc-z2knp May 10 22:17:42.889: INFO: Got endpoints: latency-svc-z2knp [945.664906ms] May 10 22:17:42.926: INFO: Created: latency-svc-vvwm7 May 10 22:17:42.994: INFO: Got endpoints: latency-svc-vvwm7 [1.014471801s] May 10 22:17:42.996: INFO: Created: latency-svc-qsxrr May 10 22:17:43.015: INFO: Got endpoints: latency-svc-qsxrr [969.232593ms] May 10 22:17:43.040: INFO: Created: latency-svc-9czcq May 10 22:17:43.070: INFO: Got endpoints: latency-svc-9czcq [981.982255ms] May 10 22:17:43.168: INFO: Created: latency-svc-4ftpw May 10 22:17:43.171: INFO: Got endpoints: latency-svc-4ftpw [1.046982621s] May 10 22:17:43.202: INFO: Created: latency-svc-686zl May 10 22:17:43.221: INFO: Got endpoints: latency-svc-686zl [989.676348ms] May 10 22:17:43.238: INFO: Created: latency-svc-bvnh6 May 10 22:17:43.256: INFO: Got 
endpoints: latency-svc-bvnh6 [953.336877ms] May 10 22:17:43.324: INFO: Created: latency-svc-q2wxw May 10 22:17:43.331: INFO: Got endpoints: latency-svc-q2wxw [931.291001ms] May 10 22:17:43.364: INFO: Created: latency-svc-7h2dt May 10 22:17:43.377: INFO: Got endpoints: latency-svc-7h2dt [843.904173ms] May 10 22:17:43.474: INFO: Created: latency-svc-khdfl May 10 22:17:43.520: INFO: Created: latency-svc-8snn6 May 10 22:17:43.520: INFO: Got endpoints: latency-svc-khdfl [956.728633ms] May 10 22:17:43.562: INFO: Got endpoints: latency-svc-8snn6 [955.89939ms] May 10 22:17:43.610: INFO: Created: latency-svc-2jbg9 May 10 22:17:43.623: INFO: Got endpoints: latency-svc-2jbg9 [925.967012ms] May 10 22:17:43.646: INFO: Created: latency-svc-j5lqb May 10 22:17:43.683: INFO: Got endpoints: latency-svc-j5lqb [939.01656ms] May 10 22:17:43.749: INFO: Created: latency-svc-snchs May 10 22:17:43.756: INFO: Got endpoints: latency-svc-snchs [974.727319ms] May 10 22:17:43.778: INFO: Created: latency-svc-2xpzv May 10 22:17:43.793: INFO: Got endpoints: latency-svc-2xpzv [956.662738ms] May 10 22:17:43.826: INFO: Created: latency-svc-6bsm5 May 10 22:17:43.835: INFO: Got endpoints: latency-svc-6bsm5 [945.329596ms] May 10 22:17:43.911: INFO: Created: latency-svc-mz8fc May 10 22:17:43.913: INFO: Got endpoints: latency-svc-mz8fc [919.22551ms] May 10 22:17:43.945: INFO: Created: latency-svc-66kmr May 10 22:17:43.973: INFO: Got endpoints: latency-svc-66kmr [958.054126ms] May 10 22:17:44.000: INFO: Created: latency-svc-f6bsr May 10 22:17:44.060: INFO: Got endpoints: latency-svc-f6bsr [989.530754ms] May 10 22:17:44.061: INFO: Created: latency-svc-mqxpz May 10 22:17:44.069: INFO: Got endpoints: latency-svc-mqxpz [897.689687ms] May 10 22:17:44.090: INFO: Created: latency-svc-fbll8 May 10 22:17:44.106: INFO: Got endpoints: latency-svc-fbll8 [885.472774ms] May 10 22:17:44.132: INFO: Created: latency-svc-4qbw4 May 10 22:17:44.142: INFO: Got endpoints: latency-svc-4qbw4 [885.578414ms] May 10 22:17:44.210: 
INFO: Created: latency-svc-w6cc7 May 10 22:17:44.214: INFO: Got endpoints: latency-svc-w6cc7 [883.377013ms] May 10 22:17:44.245: INFO: Created: latency-svc-pp94x May 10 22:17:44.276: INFO: Got endpoints: latency-svc-pp94x [898.975787ms] May 10 22:17:44.354: INFO: Created: latency-svc-jzldp May 10 22:17:44.357: INFO: Got endpoints: latency-svc-jzldp [837.28548ms] May 10 22:17:44.408: INFO: Created: latency-svc-j7jb7 May 10 22:17:44.419: INFO: Got endpoints: latency-svc-j7jb7 [856.931001ms] May 10 22:17:44.443: INFO: Created: latency-svc-bvrct May 10 22:17:44.485: INFO: Got endpoints: latency-svc-bvrct [861.80921ms] May 10 22:17:44.511: INFO: Created: latency-svc-lp9pf May 10 22:17:44.527: INFO: Got endpoints: latency-svc-lp9pf [843.814103ms] May 10 22:17:44.629: INFO: Created: latency-svc-xwjrz May 10 22:17:44.633: INFO: Got endpoints: latency-svc-xwjrz [876.838892ms] May 10 22:17:44.659: INFO: Created: latency-svc-ql2p7 May 10 22:17:44.672: INFO: Got endpoints: latency-svc-ql2p7 [878.627846ms] May 10 22:17:44.696: INFO: Created: latency-svc-hrbml May 10 22:17:44.708: INFO: Got endpoints: latency-svc-hrbml [873.108542ms] May 10 22:17:44.779: INFO: Created: latency-svc-7lp59 May 10 22:17:44.782: INFO: Got endpoints: latency-svc-7lp59 [868.792034ms] May 10 22:17:44.815: INFO: Created: latency-svc-7pncp May 10 22:17:44.829: INFO: Got endpoints: latency-svc-7pncp [855.137817ms] May 10 22:17:44.851: INFO: Created: latency-svc-vw72q May 10 22:17:44.865: INFO: Got endpoints: latency-svc-vw72q [805.132819ms] May 10 22:17:44.928: INFO: Created: latency-svc-rj4xm May 10 22:17:44.931: INFO: Got endpoints: latency-svc-rj4xm [862.224917ms] May 10 22:17:44.990: INFO: Created: latency-svc-cppgc May 10 22:17:45.010: INFO: Got endpoints: latency-svc-cppgc [903.545207ms] May 10 22:17:45.072: INFO: Created: latency-svc-92hjb May 10 22:17:45.077: INFO: Got endpoints: latency-svc-92hjb [935.253135ms] May 10 22:17:45.115: INFO: Created: latency-svc-mnr6j May 10 22:17:45.124: INFO: Got 
endpoints: latency-svc-mnr6j [910.096789ms] May 10 22:17:45.145: INFO: Created: latency-svc-wqlhb May 10 22:17:45.258: INFO: Got endpoints: latency-svc-wqlhb [981.320828ms] May 10 22:17:45.260: INFO: Created: latency-svc-g646s May 10 22:17:45.269: INFO: Got endpoints: latency-svc-g646s [911.798755ms] May 10 22:17:45.301: INFO: Created: latency-svc-lndjd May 10 22:17:45.317: INFO: Got endpoints: latency-svc-lndjd [897.450369ms] May 10 22:17:45.339: INFO: Created: latency-svc-ns6bm May 10 22:17:45.353: INFO: Got endpoints: latency-svc-ns6bm [868.339592ms] May 10 22:17:45.413: INFO: Created: latency-svc-85zht May 10 22:17:45.416: INFO: Got endpoints: latency-svc-85zht [888.913495ms] May 10 22:17:45.475: INFO: Created: latency-svc-z2w2v May 10 22:17:45.497: INFO: Got endpoints: latency-svc-z2w2v [864.593655ms] May 10 22:17:45.569: INFO: Created: latency-svc-vwpjc May 10 22:17:45.575: INFO: Got endpoints: latency-svc-vwpjc [903.597518ms] May 10 22:17:45.602: INFO: Created: latency-svc-7g8rn May 10 22:17:45.618: INFO: Got endpoints: latency-svc-7g8rn [909.746296ms] May 10 22:17:45.643: INFO: Created: latency-svc-phfc6 May 10 22:17:45.654: INFO: Got endpoints: latency-svc-phfc6 [872.026215ms] May 10 22:17:45.743: INFO: Created: latency-svc-sw4vx May 10 22:17:45.747: INFO: Got endpoints: latency-svc-sw4vx [917.979584ms] May 10 22:17:45.787: INFO: Created: latency-svc-8z5cd May 10 22:17:45.805: INFO: Got endpoints: latency-svc-8z5cd [939.721797ms] May 10 22:17:45.911: INFO: Created: latency-svc-llsmq May 10 22:17:45.914: INFO: Got endpoints: latency-svc-llsmq [983.005954ms] May 10 22:17:45.979: INFO: Created: latency-svc-n48cc May 10 22:17:46.008: INFO: Got endpoints: latency-svc-n48cc [998.102718ms] May 10 22:17:46.061: INFO: Created: latency-svc-rqp2s May 10 22:17:46.093: INFO: Got endpoints: latency-svc-rqp2s [1.015847658s] May 10 22:17:46.093: INFO: Created: latency-svc-9wn6m May 10 22:17:46.106: INFO: Got endpoints: latency-svc-9wn6m [981.419273ms] May 10 22:17:46.129: 
INFO: Created: latency-svc-kz49c May 10 22:17:46.142: INFO: Got endpoints: latency-svc-kz49c [883.941946ms] May 10 22:17:46.204: INFO: Created: latency-svc-jkltx May 10 22:17:46.207: INFO: Got endpoints: latency-svc-jkltx [937.8522ms] May 10 22:17:46.261: INFO: Created: latency-svc-ts4tp May 10 22:17:46.291: INFO: Got endpoints: latency-svc-ts4tp [974.085636ms] May 10 22:17:46.366: INFO: Created: latency-svc-2n2fn May 10 22:17:46.368: INFO: Got endpoints: latency-svc-2n2fn [1.015029476s] May 10 22:17:46.399: INFO: Created: latency-svc-zccf2 May 10 22:17:46.412: INFO: Got endpoints: latency-svc-zccf2 [996.188344ms] May 10 22:17:46.435: INFO: Created: latency-svc-ncxrr May 10 22:17:46.449: INFO: Got endpoints: latency-svc-ncxrr [951.887289ms] May 10 22:17:46.515: INFO: Created: latency-svc-hplxp May 10 22:17:46.560: INFO: Got endpoints: latency-svc-hplxp [985.213969ms] May 10 22:17:46.561: INFO: Created: latency-svc-kk8hl May 10 22:17:46.585: INFO: Got endpoints: latency-svc-kk8hl [966.931955ms] May 10 22:17:46.678: INFO: Created: latency-svc-gxdcj May 10 22:17:46.697: INFO: Got endpoints: latency-svc-gxdcj [1.04239531s] May 10 22:17:46.717: INFO: Created: latency-svc-twnn6 May 10 22:17:46.732: INFO: Got endpoints: latency-svc-twnn6 [984.784558ms] May 10 22:17:46.759: INFO: Created: latency-svc-6nz59 May 10 22:17:46.774: INFO: Got endpoints: latency-svc-6nz59 [969.032019ms] May 10 22:17:46.821: INFO: Created: latency-svc-v9t6p May 10 22:17:46.828: INFO: Got endpoints: latency-svc-v9t6p [913.82254ms] May 10 22:17:46.849: INFO: Created: latency-svc-twnqq May 10 22:17:46.865: INFO: Got endpoints: latency-svc-twnqq [856.247369ms] May 10 22:17:46.891: INFO: Created: latency-svc-nbkjt May 10 22:17:46.901: INFO: Got endpoints: latency-svc-nbkjt [807.994828ms] May 10 22:17:46.970: INFO: Created: latency-svc-rd5gs May 10 22:17:46.980: INFO: Got endpoints: latency-svc-rd5gs [874.254568ms] May 10 22:17:47.028: INFO: Created: latency-svc-5l248 May 10 22:17:47.046: INFO: Got 
endpoints: latency-svc-5l248 [903.797917ms] May 10 22:17:47.110: INFO: Created: latency-svc-q7sgq May 10 22:17:47.111: INFO: Got endpoints: latency-svc-q7sgq [904.269364ms] May 10 22:17:47.149: INFO: Created: latency-svc-2cd8d May 10 22:17:47.160: INFO: Got endpoints: latency-svc-2cd8d [869.110758ms] May 10 22:17:47.196: INFO: Created: latency-svc-rrkjg May 10 22:17:47.250: INFO: Got endpoints: latency-svc-rrkjg [881.627015ms] May 10 22:17:47.281: INFO: Created: latency-svc-89whk May 10 22:17:47.301: INFO: Got endpoints: latency-svc-89whk [888.826983ms] May 10 22:17:47.329: INFO: Created: latency-svc-fmljd May 10 22:17:47.371: INFO: Got endpoints: latency-svc-fmljd [922.222374ms] May 10 22:17:47.382: INFO: Created: latency-svc-tvgkz May 10 22:17:47.401: INFO: Got endpoints: latency-svc-tvgkz [840.526049ms] May 10 22:17:47.454: INFO: Created: latency-svc-7vwfz May 10 22:17:47.503: INFO: Got endpoints: latency-svc-7vwfz [918.243062ms] May 10 22:17:47.520: INFO: Created: latency-svc-ghbxk May 10 22:17:47.540: INFO: Got endpoints: latency-svc-ghbxk [842.911191ms] May 10 22:17:47.574: INFO: Created: latency-svc-4j6gk May 10 22:17:47.588: INFO: Got endpoints: latency-svc-4j6gk [856.024387ms] May 10 22:17:47.635: INFO: Created: latency-svc-ltjkp May 10 22:17:47.638: INFO: Got endpoints: latency-svc-ltjkp [864.161919ms] May 10 22:17:47.670: INFO: Created: latency-svc-lgzkt May 10 22:17:47.684: INFO: Got endpoints: latency-svc-lgzkt [855.743878ms] May 10 22:17:47.706: INFO: Created: latency-svc-76p5n May 10 22:17:47.773: INFO: Got endpoints: latency-svc-76p5n [907.994049ms] May 10 22:17:47.784: INFO: Created: latency-svc-jvd6h May 10 22:17:47.799: INFO: Got endpoints: latency-svc-jvd6h [897.682099ms] May 10 22:17:47.820: INFO: Created: latency-svc-cpc5d May 10 22:17:47.835: INFO: Got endpoints: latency-svc-cpc5d [854.973458ms] May 10 22:17:47.856: INFO: Created: latency-svc-gvv9f May 10 22:17:47.871: INFO: Got endpoints: latency-svc-gvv9f [825.705274ms] May 10 22:17:47.949: 
INFO: Created: latency-svc-lbdkh May 10 22:17:47.951: INFO: Got endpoints: latency-svc-lbdkh [839.271694ms] May 10 22:17:48.000: INFO: Created: latency-svc-wtsjn May 10 22:17:48.016: INFO: Got endpoints: latency-svc-wtsjn [856.109298ms] May 10 22:17:48.042: INFO: Created: latency-svc-7cfln May 10 22:17:48.084: INFO: Got endpoints: latency-svc-7cfln [834.027153ms] May 10 22:17:48.096: INFO: Created: latency-svc-hvd8l May 10 22:17:48.112: INFO: Got endpoints: latency-svc-hvd8l [811.086003ms] May 10 22:17:48.132: INFO: Created: latency-svc-qstsp May 10 22:17:48.149: INFO: Got endpoints: latency-svc-qstsp [777.540504ms] May 10 22:17:48.174: INFO: Created: latency-svc-9dkdh May 10 22:17:48.222: INFO: Got endpoints: latency-svc-9dkdh [820.796226ms] May 10 22:17:48.235: INFO: Created: latency-svc-nlk7l May 10 22:17:48.258: INFO: Got endpoints: latency-svc-nlk7l [754.661754ms] May 10 22:17:48.313: INFO: Created: latency-svc-sgqx6 May 10 22:17:48.359: INFO: Got endpoints: latency-svc-sgqx6 [819.83381ms] May 10 22:17:48.372: INFO: Created: latency-svc-thcs5 May 10 22:17:48.389: INFO: Got endpoints: latency-svc-thcs5 [801.703576ms] May 10 22:17:48.414: INFO: Created: latency-svc-b6dw4 May 10 22:17:48.426: INFO: Got endpoints: latency-svc-b6dw4 [787.327024ms] May 10 22:17:48.450: INFO: Created: latency-svc-r4wff May 10 22:17:48.503: INFO: Got endpoints: latency-svc-r4wff [818.667005ms] May 10 22:17:48.516: INFO: Created: latency-svc-b8c9w May 10 22:17:48.535: INFO: Got endpoints: latency-svc-b8c9w [761.932998ms] May 10 22:17:48.564: INFO: Created: latency-svc-8w65l May 10 22:17:48.653: INFO: Got endpoints: latency-svc-8w65l [854.264409ms] May 10 22:17:48.666: INFO: Created: latency-svc-59cnq May 10 22:17:48.685: INFO: Got endpoints: latency-svc-59cnq [850.265398ms] May 10 22:17:48.714: INFO: Created: latency-svc-6qft2 May 10 22:17:48.727: INFO: Got endpoints: latency-svc-6qft2 [855.824658ms] May 10 22:17:48.749: INFO: Created: latency-svc-4kdg6 May 10 22:17:48.790: INFO: Got 
endpoints: latency-svc-4kdg6 [839.830073ms]
May 10 22:17:48.804: INFO: Created: latency-svc-z9845
May 10 22:17:48.834: INFO: Got endpoints: latency-svc-z9845 [817.88197ms]
May 10 22:17:48.864: INFO: Created: latency-svc-67htp
May 10 22:17:48.881: INFO: Got endpoints: latency-svc-67htp [797.175517ms]
May 10 22:17:48.928: INFO: Created: latency-svc-4c6m4
May 10 22:17:48.931: INFO: Got endpoints: latency-svc-4c6m4 [818.63654ms]
May 10 22:17:48.966: INFO: Created: latency-svc-cwskp
May 10 22:17:48.984: INFO: Got endpoints: latency-svc-cwskp [835.025306ms]
May 10 22:17:49.001: INFO: Created: latency-svc-cc9zg
May 10 22:17:49.026: INFO: Got endpoints: latency-svc-cc9zg [803.891221ms]
May 10 22:17:49.102: INFO: Created: latency-svc-c7mz6
May 10 22:17:49.104: INFO: Got endpoints: latency-svc-c7mz6 [846.502946ms]
May 10 22:17:49.146: INFO: Created: latency-svc-vpknd
May 10 22:17:49.158: INFO: Got endpoints: latency-svc-vpknd [799.044831ms]
May 10 22:17:49.184: INFO: Created: latency-svc-ml7p8
May 10 22:17:49.195: INFO: Got endpoints: latency-svc-ml7p8 [805.138004ms]
May 10 22:17:49.246: INFO: Created: latency-svc-8924r
May 10 22:17:49.278: INFO: Got endpoints: latency-svc-8924r [852.489774ms]
May 10 22:17:49.314: INFO: Created: latency-svc-v5p4c
May 10 22:17:49.327: INFO: Got endpoints: latency-svc-v5p4c [824.450909ms]
May 10 22:17:49.383: INFO: Created: latency-svc-q582n
May 10 22:17:49.386: INFO: Got endpoints: latency-svc-q582n [851.211614ms]
May 10 22:17:49.470: INFO: Created: latency-svc-w82cc
May 10 22:17:49.516: INFO: Got endpoints: latency-svc-w82cc [862.699496ms]
May 10 22:17:49.532: INFO: Created: latency-svc-wfxb7
May 10 22:17:49.550: INFO: Got endpoints: latency-svc-wfxb7 [864.904174ms]
May 10 22:17:49.596: INFO: Created: latency-svc-jphz4
May 10 22:17:49.648: INFO: Got endpoints: latency-svc-jphz4 [920.506781ms]
May 10 22:17:49.656: INFO: Created: latency-svc-wx5gh
May 10 22:17:49.670: INFO: Got endpoints: latency-svc-wx5gh [879.787636ms]
May 10 22:17:49.691: INFO: Created: latency-svc-nxqdk
May 10 22:17:49.700: INFO: Got endpoints: latency-svc-nxqdk [866.256692ms]
May 10 22:17:49.700: INFO: Latencies: [47.723797ms 85.880605ms 160.54612ms 174.065721ms 294.135261ms 344.030994ms 386.411635ms 454.131431ms 536.376824ms 594.483992ms 754.661754ms 761.932998ms 777.540504ms 787.327024ms 797.175517ms 799.044831ms 801.703576ms 803.891221ms 805.132819ms 805.138004ms 807.994828ms 811.086003ms 817.88197ms 818.63654ms 818.667005ms 819.83381ms 820.796226ms 824.450909ms 825.705274ms 834.027153ms 835.025306ms 837.28548ms 839.271694ms 839.830073ms 840.526049ms 842.911191ms 843.814103ms 843.904173ms 846.502946ms 850.265398ms 851.211614ms 852.489774ms 854.264409ms 854.973458ms 855.137817ms 855.743878ms 855.824658ms 856.024387ms 856.109298ms 856.247369ms 856.922213ms 856.931001ms 858.802305ms 861.80921ms 862.224917ms 862.699496ms 864.161919ms 864.593655ms 864.904174ms 866.256692ms 868.339592ms 868.792034ms 869.110758ms 872.026215ms 873.108542ms 874.254568ms 875.702715ms 876.838892ms 878.627846ms 879.787636ms 881.627015ms 883.377013ms 883.941946ms 885.472774ms 885.578414ms 888.826983ms 888.913495ms 897.450369ms 897.682099ms 897.689687ms 898.975787ms 902.600644ms 903.545207ms 903.597518ms 903.797917ms 904.269364ms 907.994049ms 909.746296ms 910.096789ms 911.798755ms 913.173087ms 913.82254ms 917.869241ms 917.979584ms 918.243062ms 919.22551ms 920.506781ms 922.222374ms 925.967012ms 931.291001ms 935.080091ms 935.253135ms 937.8522ms 939.01656ms 939.721797ms 945.329596ms 945.664906ms 948.278687ms 951.458742ms 951.887289ms 953.336877ms 955.89939ms 956.662738ms 956.728633ms 958.054126ms 959.997419ms 962.026562ms 963.42101ms 963.659955ms 966.931955ms 969.032019ms 969.232593ms 970.517063ms 970.847171ms 971.476503ms 974.085636ms 974.727319ms 976.631847ms 981.320828ms 981.419273ms 981.720253ms 981.982255ms 983.005954ms 984.784558ms 985.213969ms 989.530754ms 989.676348ms 996.188344ms 998.102718ms 1.000960352s 1.002539003s 1.003312281s 1.005682981s 1.011954072s 1.014471801s 1.015029476s 1.015847658s 1.017526783s 1.021869433s 1.024954147s 1.032519355s 1.04239531s 1.046982621s 1.052441461s 1.052875109s 1.063634125s 1.084948029s 1.099020769s 1.099179453s 1.101539352s 1.106961336s 1.108269704s 1.116618328s 1.12378649s 1.124544409s 1.124802942s 1.124935893s 1.125878734s 1.132001297s 1.134483301s 1.13958671s 1.14304282s 1.146043509s 1.146862018s 1.149352071s 1.155006566s 1.155100071s 1.160287511s 1.165679123s 1.172621043s 1.178476631s 1.208330504s 1.209330104s 1.215616059s 1.220863365s 1.222059843s 1.226570656s 1.246916056s 1.284968564s 1.290624476s 1.31510173s 1.318993603s 1.320177985s 1.344292223s 1.347378416s 1.35721747s 1.368922968s 1.377156978s 1.391234298s 1.409681341s]
May 10 22:17:49.701: INFO: 50 %ile: 935.080091ms
May 10 22:17:49.701: INFO: 90 %ile: 1.178476631s
May 10 22:17:49.701: INFO: 99 %ile: 1.391234298s
May 10 22:17:49.701: INFO: Total sample count: 200
[AfterEach] [sig-network] Service endpoints latency
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
May 10 22:17:49.701: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svc-latency-9134" for this suite.
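The 50/90/99 %ile values reported above are taken from the sorted `Latencies:` array of 200 samples. As a rough cross-check, a nearest-rank percentile can be sketched as below; this is an illustrative method with made-up sample values, and may not match the e2e framework's exact indexing or rounding.

```python
def percentile(sorted_samples, p):
    """Nearest-rank percentile over an already-sorted list of durations.

    Illustrative only: the Kubernetes e2e framework's exact
    index/rounding rule may differ slightly.
    """
    if not sorted_samples:
        raise ValueError("no samples")
    # Rank of the p-th percentile, clamped to a valid list index.
    idx = max(0, int(round(p / 100.0 * len(sorted_samples))) - 1)
    return sorted_samples[idx]

# Hypothetical latency samples in milliseconds (the real run had 200).
samples = sorted([47.7, 85.9, 160.5, 935.1, 956.7, 1178.5, 1391.2, 1409.7])
p50, p90, p99 = (percentile(samples, p) for p in (50, 90, 99))
```

With larger sample sets this converges on the familiar tail-latency summary the test prints.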
• [SLOW TEST:16.471 seconds]
[sig-network] Service endpoints latency
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
should not be very high [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] Service endpoints latency should not be very high [Conformance]","total":278,"completed":243,"skipped":3898,"failed":0}
SSSSSSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
May 10 22:17:49.711: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should print the output to logs [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[AfterEach] [k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
May 10 22:17:53.820: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-450" for this suite.
•{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance]","total":278,"completed":244,"skipped":3916,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Proxy server should support proxy with --port 0 [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
May 10 22:17:53.832: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272
[It] should support proxy with --port 0 [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: starting the proxy server
May 10 22:17:53.925: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy -p 0 --disable-filter'
STEP: curling proxy /api/ output
[AfterEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
May 10 22:17:54.008: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-1357" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Proxy server should support proxy with --port 0 [Conformance]","total":278,"completed":245,"skipped":4031,"failed":0}
SSSSSSSSSS
------------------------------
[k8s.io] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
May 10 22:17:54.016: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow composing env vars into new env vars [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test env composition
May 10 22:17:54.080: INFO: Waiting up to 5m0s for pod "var-expansion-dde24a6f-c735-4003-b2e6-3abbdc18875f" in namespace "var-expansion-4885" to be "success or failure"
May 10 22:17:54.110: INFO: Pod "var-expansion-dde24a6f-c735-4003-b2e6-3abbdc18875f": Phase="Pending", Reason="", readiness=false. Elapsed: 30.293895ms
May 10 22:17:56.216: INFO: Pod "var-expansion-dde24a6f-c735-4003-b2e6-3abbdc18875f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.136011369s
May 10 22:17:58.285: INFO: Pod "var-expansion-dde24a6f-c735-4003-b2e6-3abbdc18875f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.205266853s
STEP: Saw pod success
May 10 22:17:58.285: INFO: Pod "var-expansion-dde24a6f-c735-4003-b2e6-3abbdc18875f" satisfied condition "success or failure"
May 10 22:17:58.308: INFO: Trying to get logs from node jerma-worker pod var-expansion-dde24a6f-c735-4003-b2e6-3abbdc18875f container dapi-container:
STEP: delete the pod
May 10 22:17:58.398: INFO: Waiting for pod var-expansion-dde24a6f-c735-4003-b2e6-3abbdc18875f to disappear
May 10 22:17:58.417: INFO: Pod var-expansion-dde24a6f-c735-4003-b2e6-3abbdc18875f no longer exists
[AfterEach] [k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
May 10 22:17:58.417: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-4885" for this suite.
•{"msg":"PASSED [k8s.io] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance]","total":278,"completed":246,"skipped":4041,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
May 10 22:17:58.440: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: create the rc1
STEP: create the rc2
STEP: set half of pods created by rc simpletest-rc-to-be-deleted to have rc simpletest-rc-to-stay as owner as well
STEP: delete the rc simpletest-rc-to-be-deleted
STEP: wait for the rc to be deleted
STEP: Gathering metrics
W0510 22:18:10.867262       6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
May 10 22:18:10.867: INFO: For apiserver_request_total:
For apiserver_request_latency_seconds:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:
[AfterEach] [sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
May 10 22:18:10.867: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-4370" for this suite.
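The ownership rule this test exercises — a dependent whose other owner is still alive must survive deletion of one owner — can be modeled with a toy sweep. Everything below (the `Obj` type, the `sweep` helper) is a hypothetical illustration, not the real garbage collector's code:

```python
from dataclasses import dataclass

@dataclass
class Obj:
    name: str
    owners: frozenset = frozenset()  # names of this object's owners

def sweep(objects, deleted):
    """Return names of objects that survive after `deleted` owners go away.

    Toy rule mirroring the test's expectation: an object is kept if it
    has no owners at all (it is a root) or at least one owner that was
    not deleted.
    """
    return [o.name for o in objects if not o.owners or (o.owners - deleted)]

rc_stay = Obj("simpletest-rc-to-stay")
pod = Obj("pod-0", frozenset({"simpletest-rc-to-be-deleted", "simpletest-rc-to-stay"}))

# Deleting rc-to-be-deleted must not take the pod with it,
# because simpletest-rc-to-stay is still a valid owner.
survivors = sweep([rc_stay, pod], deleted={"simpletest-rc-to-be-deleted"})
```

In the real cluster the same decision is made from `metadata.ownerReferences` on each dependent object.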
• [SLOW TEST:12.887 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]","total":278,"completed":247,"skipped":4079,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
May 10 22:18:11.328: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40
[It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
May 10 22:18:11.717: INFO: Waiting up to 5m0s for pod "downwardapi-volume-4c476190-4fd0-4c5e-9b5a-391d83a82fb3" in namespace "projected-9666" to be "success or failure"
May 10 22:18:11.786: INFO: Pod "downwardapi-volume-4c476190-4fd0-4c5e-9b5a-391d83a82fb3": Phase="Pending", Reason="", readiness=false. Elapsed: 68.054339ms
May 10 22:18:13.902: INFO: Pod "downwardapi-volume-4c476190-4fd0-4c5e-9b5a-391d83a82fb3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.184855412s
May 10 22:18:15.915: INFO: Pod "downwardapi-volume-4c476190-4fd0-4c5e-9b5a-391d83a82fb3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.19777952s
STEP: Saw pod success
May 10 22:18:15.915: INFO: Pod "downwardapi-volume-4c476190-4fd0-4c5e-9b5a-391d83a82fb3" satisfied condition "success or failure"
May 10 22:18:15.918: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-4c476190-4fd0-4c5e-9b5a-391d83a82fb3 container client-container:
STEP: delete the pod
May 10 22:18:16.290: INFO: Waiting for pod downwardapi-volume-4c476190-4fd0-4c5e-9b5a-391d83a82fb3 to disappear
May 10 22:18:16.300: INFO: Pod downwardapi-volume-4c476190-4fd0-4c5e-9b5a-391d83a82fb3 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
May 10 22:18:16.300: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-9666" for this suite.
• [SLOW TEST:5.023 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34
should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":248,"skipped":4126,"failed":0}
SSSSSS
------------------------------
[sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
May 10 22:18:16.351: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: create the deployment
STEP: Wait for the Deployment to create new ReplicaSet
STEP: delete the deployment
STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the rs
STEP: Gathering metrics
W0510 22:18:46.602844       6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
May 10 22:18:46.602: INFO: For apiserver_request_total:
For apiserver_request_latency_seconds:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:
[AfterEach] [sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
May 10 22:18:46.602: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-9636" for this suite.
• [SLOW TEST:30.260 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]","total":278,"completed":249,"skipped":4132,"failed":0}
SS
------------------------------
[k8s.io] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
May 10 22:18:46.611: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177
[It] should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: updating the pod
May 10 22:18:52.020: INFO: Successfully updated pod "pod-update-activedeadlineseconds-d39a6b38-e93c-4113-adab-6b4f28a2e2c9"
May 10 22:18:52.020: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-d39a6b38-e93c-4113-adab-6b4f28a2e2c9" in namespace "pods-7496" to be "terminated due to deadline exceeded"
May 10 22:18:52.038: INFO: Pod "pod-update-activedeadlineseconds-d39a6b38-e93c-4113-adab-6b4f28a2e2c9": Phase="Running", Reason="", readiness=true. Elapsed: 17.68581ms
May 10 22:18:54.079: INFO: Pod "pod-update-activedeadlineseconds-d39a6b38-e93c-4113-adab-6b4f28a2e2c9": Phase="Failed", Reason="DeadlineExceeded", readiness=false. Elapsed: 2.059065384s
May 10 22:18:54.079: INFO: Pod "pod-update-activedeadlineseconds-d39a6b38-e93c-4113-adab-6b4f28a2e2c9" satisfied condition "terminated due to deadline exceeded"
[AfterEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
May 10 22:18:54.079: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-7496" for this suite.
• [SLOW TEST:7.480 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]","total":278,"completed":250,"skipped":4134,"failed":0}
S
------------------------------
[sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [LinuxOnly] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
May 10 22:18:54.091: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with configmap pod [LinuxOnly] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating pod pod-subpath-test-configmap-wtmg
STEP: Creating a pod to test atomic-volume-subpath
May 10 22:18:54.272: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-wtmg" in namespace "subpath-6969" to be "success or failure"
May 10 22:18:54.288: INFO: Pod "pod-subpath-test-configmap-wtmg": Phase="Pending", Reason="", readiness=false. Elapsed: 15.313874ms
May 10 22:18:56.304: INFO: Pod "pod-subpath-test-configmap-wtmg": Phase="Pending", Reason="", readiness=false. Elapsed: 2.031190216s
May 10 22:18:58.316: INFO: Pod "pod-subpath-test-configmap-wtmg": Phase="Pending", Reason="", readiness=false. Elapsed: 4.043362789s
May 10 22:19:00.343: INFO: Pod "pod-subpath-test-configmap-wtmg": Phase="Running", Reason="", readiness=true. Elapsed: 6.070298374s
May 10 22:19:02.349: INFO: Pod "pod-subpath-test-configmap-wtmg": Phase="Running", Reason="", readiness=true. Elapsed: 8.076515676s
May 10 22:19:04.352: INFO: Pod "pod-subpath-test-configmap-wtmg": Phase="Running", Reason="", readiness=true. Elapsed: 10.079528792s
May 10 22:19:06.355: INFO: Pod "pod-subpath-test-configmap-wtmg": Phase="Running", Reason="", readiness=true. Elapsed: 12.083034402s
May 10 22:19:08.359: INFO: Pod "pod-subpath-test-configmap-wtmg": Phase="Running", Reason="", readiness=true. Elapsed: 14.086431364s
May 10 22:19:10.362: INFO: Pod "pod-subpath-test-configmap-wtmg": Phase="Running", Reason="", readiness=true. Elapsed: 16.089153318s
May 10 22:19:12.370: INFO: Pod "pod-subpath-test-configmap-wtmg": Phase="Running", Reason="", readiness=true. Elapsed: 18.097243116s
May 10 22:19:14.374: INFO: Pod "pod-subpath-test-configmap-wtmg": Phase="Running", Reason="", readiness=true. Elapsed: 20.101647751s
May 10 22:19:16.378: INFO: Pod "pod-subpath-test-configmap-wtmg": Phase="Running", Reason="", readiness=true. Elapsed: 22.105744624s
May 10 22:19:18.382: INFO: Pod "pod-subpath-test-configmap-wtmg": Phase="Running", Reason="", readiness=true. Elapsed: 24.109825015s
May 10 22:19:20.386: INFO: Pod "pod-subpath-test-configmap-wtmg": Phase="Succeeded", Reason="", readiness=false. Elapsed: 26.113742523s
STEP: Saw pod success
May 10 22:19:20.386: INFO: Pod "pod-subpath-test-configmap-wtmg" satisfied condition "success or failure"
May 10 22:19:20.388: INFO: Trying to get logs from node jerma-worker2 pod pod-subpath-test-configmap-wtmg container test-container-subpath-configmap-wtmg:
STEP: delete the pod
May 10 22:19:20.407: INFO: Waiting for pod pod-subpath-test-configmap-wtmg to disappear
May 10 22:19:20.424: INFO: Pod pod-subpath-test-configmap-wtmg no longer exists
STEP: Deleting pod pod-subpath-test-configmap-wtmg
May 10 22:19:20.424: INFO: Deleting pod "pod-subpath-test-configmap-wtmg" in namespace "subpath-6969"
[AfterEach] [sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
May 10 22:19:20.427: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-6969" for this suite.
• [SLOW TEST:26.465 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
Atomic writer volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
should support subpaths with configmap pod [LinuxOnly] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [LinuxOnly] [Conformance]","total":278,"completed":251,"skipped":4135,"failed":0}
SSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Update Demo should do a rolling update of a replication controller [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
May 10 22:19:20.556: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272
[BeforeEach] Update Demo
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:324
[It] should do a rolling update of a replication controller [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating the initial replication controller
May 10 22:19:20.602: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-991'
May 10 22:19:20.911: INFO: stderr: ""
May 10 22:19:20.911: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
May 10 22:19:20.911: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-991'
May 10 22:19:21.020: INFO: stderr: ""
May 10 22:19:21.020: INFO: stdout: "update-demo-nautilus-9wcg9 update-demo-nautilus-l9r2b "
May 10 22:19:21.020: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-9wcg9 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-991'
May 10 22:19:21.146: INFO: stderr: ""
May 10 22:19:21.146: INFO: stdout: ""
May 10 22:19:21.146: INFO: update-demo-nautilus-9wcg9 is created but not running
May 10 22:19:26.147: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-991'
May 10 22:19:26.908: INFO: stderr: ""
May 10 22:19:26.908: INFO: stdout: "update-demo-nautilus-9wcg9 update-demo-nautilus-l9r2b "
May 10 22:19:26.908: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-9wcg9 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-991'
May 10 22:19:29.394: INFO: stderr: ""
May 10 22:19:29.394: INFO: stdout: "true"
May 10 22:19:29.394: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-9wcg9 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-991'
May 10 22:19:29.509: INFO: stderr: ""
May 10 22:19:29.509: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
May 10 22:19:29.509: INFO: validating pod update-demo-nautilus-9wcg9
May 10 22:19:29.529: INFO: got data: { "image": "nautilus.jpg" }
May 10 22:19:29.529: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
May 10 22:19:29.529: INFO: update-demo-nautilus-9wcg9 is verified up and running
May 10 22:19:29.529: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-l9r2b -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-991'
May 10 22:19:29.641: INFO: stderr: ""
May 10 22:19:29.641: INFO: stdout: "true"
May 10 22:19:29.641: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-l9r2b -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-991'
May 10 22:19:29.746: INFO: stderr: ""
May 10 22:19:29.746: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
May 10 22:19:29.746: INFO: validating pod update-demo-nautilus-l9r2b
May 10 22:19:29.750: INFO: got data: { "image": "nautilus.jpg" }
May 10 22:19:29.750: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
May 10 22:19:29.750: INFO: update-demo-nautilus-l9r2b is verified up and running
STEP: rolling-update to new replication controller
May 10 22:19:29.753: INFO: scanned /root for discovery docs:
May 10 22:19:29.753: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update update-demo-nautilus --update-period=1s -f - --namespace=kubectl-991'
May 10 22:19:52.527: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n"
May 10 22:19:52.527: INFO: stdout: "Created update-demo-kitten\nScaling up update-demo-kitten from 0 to 2, scaling down update-demo-nautilus from 2 to 0 (keep 2 pods available, don't exceed 3 pods)\nScaling update-demo-kitten up to 1\nScaling update-demo-nautilus down to 1\nScaling update-demo-kitten up to 2\nScaling update-demo-nautilus down to 0\nUpdate succeeded. Deleting old controller: update-demo-nautilus\nRenaming update-demo-kitten to update-demo-nautilus\nreplicationcontroller/update-demo-nautilus rolling updated\n"
STEP: waiting for all containers in name=update-demo pods to come up.
May 10 22:19:52.527: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-991'
May 10 22:19:52.637: INFO: stderr: ""
May 10 22:19:52.637: INFO: stdout: "update-demo-kitten-8ks58 update-demo-kitten-g728m "
May 10 22:19:52.637: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-8ks58 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-991'
May 10 22:19:52.733: INFO: stderr: ""
May 10 22:19:52.733: INFO: stdout: "true"
May 10 22:19:52.733: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-8ks58 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-991'
May 10 22:19:52.826: INFO: stderr: ""
May 10 22:19:52.826: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0"
May 10 22:19:52.826: INFO: validating pod update-demo-kitten-8ks58
May 10 22:19:52.830: INFO: got data: { "image": "kitten.jpg" }
May 10 22:19:52.830: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg .
May 10 22:19:52.830: INFO: update-demo-kitten-8ks58 is verified up and running
May 10 22:19:52.830: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-g728m -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-991'
May 10 22:19:52.919: INFO: stderr: ""
May 10 22:19:52.919: INFO: stdout: "true"
May 10 22:19:52.919: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-g728m -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-991'
May 10 22:19:53.023: INFO: stderr: ""
May 10 22:19:53.023: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0"
May 10 22:19:53.023: INFO: validating pod update-demo-kitten-g728m
May 10 22:19:53.027: INFO: got data: { "image": "kitten.jpg" }
May 10 22:19:53.027: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg .
May 10 22:19:53.027: INFO: update-demo-kitten-g728m is verified up and running
[AfterEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
May 10 22:19:53.027: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-991" for this suite.
• [SLOW TEST:32.478 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:322 should do a rolling update of a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Update Demo should do a rolling update of a replication controller [Conformance]","total":278,"completed":252,"skipped":4146,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 10 22:19:53.034: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153 [It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the pod May 10 22:19:53.087: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 10 22:20:02.231: INFO: Waiting up to 3m0s for all (but 0) 
nodes to be ready STEP: Destroying namespace "init-container-3565" for this suite. • [SLOW TEST:9.383 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]","total":278,"completed":253,"skipped":4169,"failed":0} SSSSSSSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 10 22:20:02.418: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153 [It] should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the pod May 10 22:20:02.968: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 10 22:20:14.055: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready 
STEP: Destroying namespace "init-container-5594" for this suite. • [SLOW TEST:11.693 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance]","total":278,"completed":254,"skipped":4182,"failed":0} SSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 10 22:20:14.112: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin May 10 22:20:14.342: INFO: Waiting up to 5m0s for pod "downwardapi-volume-7ecff90c-2967-4dc8-bb38-f51a79cbd314" in namespace "downward-api-8140" to be "success or failure" May 10 22:20:14.380: INFO: Pod "downwardapi-volume-7ecff90c-2967-4dc8-bb38-f51a79cbd314": Phase="Pending", Reason="", 
readiness=false. Elapsed: 38.475697ms May 10 22:20:16.385: INFO: Pod "downwardapi-volume-7ecff90c-2967-4dc8-bb38-f51a79cbd314": Phase="Pending", Reason="", readiness=false. Elapsed: 2.043224959s May 10 22:20:18.389: INFO: Pod "downwardapi-volume-7ecff90c-2967-4dc8-bb38-f51a79cbd314": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.0472687s STEP: Saw pod success May 10 22:20:18.389: INFO: Pod "downwardapi-volume-7ecff90c-2967-4dc8-bb38-f51a79cbd314" satisfied condition "success or failure" May 10 22:20:18.392: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-7ecff90c-2967-4dc8-bb38-f51a79cbd314 container client-container: STEP: delete the pod May 10 22:20:18.539: INFO: Waiting for pod downwardapi-volume-7ecff90c-2967-4dc8-bb38-f51a79cbd314 to disappear May 10 22:20:18.625: INFO: Pod downwardapi-volume-7ecff90c-2967-4dc8-bb38-f51a79cbd314 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 10 22:20:18.625: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-8140" for this suite. 
•{"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":278,"completed":255,"skipped":4191,"failed":0} SSSSSSS ------------------------------ [k8s.io] Probing container should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 10 22:20:18.634: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating pod busybox-aff2a207-e137-4bbc-9dba-41e9ba4d1ebc in namespace container-probe-8606 May 10 22:20:24.695: INFO: Started pod busybox-aff2a207-e137-4bbc-9dba-41e9ba4d1ebc in namespace container-probe-8606 STEP: checking the pod's current state and verifying that restartCount is present May 10 22:20:24.698: INFO: Initial restart count of pod busybox-aff2a207-e137-4bbc-9dba-41e9ba4d1ebc is 0 STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 10 22:24:25.644: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-8606" for this suite. 
• [SLOW TEST:247.074 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":278,"completed":256,"skipped":4198,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 10 22:24:25.708: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the rc STEP: delete the rc STEP: wait for the rc to be deleted May 10 22:24:34.058: INFO: 4 pods remaining May 10 22:24:34.058: INFO: 0 pods has nil DeletionTimestamp May 10 22:24:34.058: INFO: May 10 22:24:34.863: INFO: 0 pods remaining May 10 22:24:34.863: INFO: 0 pods has nil DeletionTimestamp May 10 22:24:34.863: INFO: May 10 22:24:35.790: INFO: 0 pods remaining May 10 22:24:35.790: INFO: 0 pods has nil DeletionTimestamp May 10 22:24:35.790: INFO: STEP: Gathering metrics W0510 22:24:37.298460 6 metrics_grabber.go:79] Master node 
is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. May 10 22:24:37.298: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 10 22:24:37.298: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-5473" for this suite. 
• [SLOW TEST:12.292 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]","total":278,"completed":257,"skipped":4211,"failed":0} SSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should be able to update and delete ResourceQuota. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 10 22:24:38.000: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to update and delete ResourceQuota. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a ResourceQuota STEP: Getting a ResourceQuota STEP: Updating a ResourceQuota STEP: Verifying a ResourceQuota was modified STEP: Deleting a ResourceQuota STEP: Verifying the deleted ResourceQuota [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 10 22:24:38.940: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-8506" for this suite. •{"msg":"PASSED [sig-api-machinery] ResourceQuota should be able to update and delete ResourceQuota. 
[Conformance]","total":278,"completed":258,"skipped":4217,"failed":0} ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 10 22:24:38.949: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0777 on tmpfs May 10 22:24:39.004: INFO: Waiting up to 5m0s for pod "pod-58ee81f0-98e4-4632-affc-0bc4463b3b6f" in namespace "emptydir-8569" to be "success or failure" May 10 22:24:39.036: INFO: Pod "pod-58ee81f0-98e4-4632-affc-0bc4463b3b6f": Phase="Pending", Reason="", readiness=false. Elapsed: 31.856718ms May 10 22:24:41.078: INFO: Pod "pod-58ee81f0-98e4-4632-affc-0bc4463b3b6f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.07465437s May 10 22:24:43.270: INFO: Pod "pod-58ee81f0-98e4-4632-affc-0bc4463b3b6f": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.266696343s STEP: Saw pod success May 10 22:24:43.270: INFO: Pod "pod-58ee81f0-98e4-4632-affc-0bc4463b3b6f" satisfied condition "success or failure" May 10 22:24:43.277: INFO: Trying to get logs from node jerma-worker pod pod-58ee81f0-98e4-4632-affc-0bc4463b3b6f container test-container: STEP: delete the pod May 10 22:24:43.464: INFO: Waiting for pod pod-58ee81f0-98e4-4632-affc-0bc4463b3b6f to disappear May 10 22:24:43.503: INFO: Pod pod-58ee81f0-98e4-4632-affc-0bc4463b3b6f no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 10 22:24:43.503: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-8569" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":259,"skipped":4217,"failed":0} ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 10 22:24:43.510: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0666 on node default medium May 10 22:24:43.613: INFO: Waiting up to 5m0s for pod "pod-206d7cbc-6eac-4889-b43c-2f4876ae1a34" in namespace "emptydir-1921" to be "success or failure" May 10 
22:24:43.616: INFO: Pod "pod-206d7cbc-6eac-4889-b43c-2f4876ae1a34": Phase="Pending", Reason="", readiness=false. Elapsed: 2.744099ms May 10 22:24:45.893: INFO: Pod "pod-206d7cbc-6eac-4889-b43c-2f4876ae1a34": Phase="Pending", Reason="", readiness=false. Elapsed: 2.279116604s May 10 22:24:47.896: INFO: Pod "pod-206d7cbc-6eac-4889-b43c-2f4876ae1a34": Phase="Pending", Reason="", readiness=false. Elapsed: 4.282742011s May 10 22:24:49.901: INFO: Pod "pod-206d7cbc-6eac-4889-b43c-2f4876ae1a34": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.287362885s STEP: Saw pod success May 10 22:24:49.901: INFO: Pod "pod-206d7cbc-6eac-4889-b43c-2f4876ae1a34" satisfied condition "success or failure" May 10 22:24:49.904: INFO: Trying to get logs from node jerma-worker2 pod pod-206d7cbc-6eac-4889-b43c-2f4876ae1a34 container test-container: STEP: delete the pod May 10 22:24:49.974: INFO: Waiting for pod pod-206d7cbc-6eac-4889-b43c-2f4876ae1a34 to disappear May 10 22:24:49.994: INFO: Pod pod-206d7cbc-6eac-4889-b43c-2f4876ae1a34 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 10 22:24:49.994: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-1921" for this suite. 
• [SLOW TEST:6.492 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":260,"skipped":4217,"failed":0} S ------------------------------ [sig-network] Services should provide secure master service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 10 22:24:50.002: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139 [It] should provide secure master service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 10 22:24:50.107: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-9848" for this suite. 
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143 •{"msg":"PASSED [sig-network] Services should provide secure master service [Conformance]","total":278,"completed":261,"skipped":4218,"failed":0} SSSSSSSSSSS ------------------------------ [k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 10 22:24:50.115: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 10 22:25:14.187: INFO: Container started at 2020-05-10 22:24:52 +0000 UTC, pod became ready at 2020-05-10 22:25:13 +0000 UTC [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 10 22:25:14.187: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-5040" for this suite. 
• [SLOW TEST:24.080 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]","total":278,"completed":262,"skipped":4229,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition getting/updating/patching custom resource definition status sub-resource works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 10 22:25:14.196: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] getting/updating/patching custom resource definition status sub-resource works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 10 22:25:14.278: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 10 22:25:14.859: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace 
"custom-resource-definition-9560" for this suite. •{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition getting/updating/patching custom resource definition status sub-resource works [Conformance]","total":278,"completed":263,"skipped":4266,"failed":0} SSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 10 22:25:14.934: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Performing setup for networking test in namespace pod-network-test-6849 STEP: creating a selector STEP: Creating the service pods in kubernetes May 10 22:25:15.080: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods May 10 22:25:41.166: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.244.1.128 8081 | grep -v '^\s*$'] Namespace:pod-network-test-6849 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 10 22:25:41.166: INFO: >>> kubeConfig: /root/.kube/config I0510 22:25:41.196765 6 log.go:172] (0xc0028fe790) (0xc0026a3f40) Create stream I0510 22:25:41.196799 6 log.go:172] (0xc0028fe790) (0xc0026a3f40) Stream added, broadcasting: 1 I0510 22:25:41.198805 6 log.go:172] 
(0xc0028fe790) Reply frame received for 1 I0510 22:25:41.198841 6 log.go:172] (0xc0028fe790) (0xc0015aa000) Create stream I0510 22:25:41.198852 6 log.go:172] (0xc0028fe790) (0xc0015aa000) Stream added, broadcasting: 3 I0510 22:25:41.199820 6 log.go:172] (0xc0028fe790) Reply frame received for 3 I0510 22:25:41.199861 6 log.go:172] (0xc0028fe790) (0xc0015dac80) Create stream I0510 22:25:41.199881 6 log.go:172] (0xc0028fe790) (0xc0015dac80) Stream added, broadcasting: 5 I0510 22:25:41.200684 6 log.go:172] (0xc0028fe790) Reply frame received for 5 I0510 22:25:42.254499 6 log.go:172] (0xc0028fe790) Data frame received for 3 I0510 22:25:42.254537 6 log.go:172] (0xc0015aa000) (3) Data frame handling I0510 22:25:42.254560 6 log.go:172] (0xc0015aa000) (3) Data frame sent I0510 22:25:42.254658 6 log.go:172] (0xc0028fe790) Data frame received for 5 I0510 22:25:42.254701 6 log.go:172] (0xc0015dac80) (5) Data frame handling I0510 22:25:42.255176 6 log.go:172] (0xc0028fe790) Data frame received for 3 I0510 22:25:42.255207 6 log.go:172] (0xc0015aa000) (3) Data frame handling I0510 22:25:42.257338 6 log.go:172] (0xc0028fe790) Data frame received for 1 I0510 22:25:42.257360 6 log.go:172] (0xc0026a3f40) (1) Data frame handling I0510 22:25:42.257369 6 log.go:172] (0xc0026a3f40) (1) Data frame sent I0510 22:25:42.257566 6 log.go:172] (0xc0028fe790) (0xc0026a3f40) Stream removed, broadcasting: 1 I0510 22:25:42.257661 6 log.go:172] (0xc0028fe790) Go away received I0510 22:25:42.257715 6 log.go:172] (0xc0028fe790) (0xc0026a3f40) Stream removed, broadcasting: 1 I0510 22:25:42.257758 6 log.go:172] (0xc0028fe790) (0xc0015aa000) Stream removed, broadcasting: 3 I0510 22:25:42.257780 6 log.go:172] (0xc0028fe790) (0xc0015dac80) Stream removed, broadcasting: 5 May 10 22:25:42.257: INFO: Found all expected endpoints: [netserver-0] May 10 22:25:42.271: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.244.2.38 8081 | grep -v '^\s*$'] Namespace:pod-network-test-6849 
PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 10 22:25:42.271: INFO: >>> kubeConfig: /root/.kube/config I0510 22:25:42.299960 6 log.go:172] (0xc001e0c9a0) (0xc001a13ea0) Create stream I0510 22:25:42.299998 6 log.go:172] (0xc001e0c9a0) (0xc001a13ea0) Stream added, broadcasting: 1 I0510 22:25:42.301834 6 log.go:172] (0xc001e0c9a0) Reply frame received for 1 I0510 22:25:42.301876 6 log.go:172] (0xc001e0c9a0) (0xc002320e60) Create stream I0510 22:25:42.301888 6 log.go:172] (0xc001e0c9a0) (0xc002320e60) Stream added, broadcasting: 3 I0510 22:25:42.302763 6 log.go:172] (0xc001e0c9a0) Reply frame received for 3 I0510 22:25:42.302798 6 log.go:172] (0xc001e0c9a0) (0xc0027fe000) Create stream I0510 22:25:42.302809 6 log.go:172] (0xc001e0c9a0) (0xc0027fe000) Stream added, broadcasting: 5 I0510 22:25:42.303540 6 log.go:172] (0xc001e0c9a0) Reply frame received for 5 I0510 22:25:43.359150 6 log.go:172] (0xc001e0c9a0) Data frame received for 3 I0510 22:25:43.359197 6 log.go:172] (0xc002320e60) (3) Data frame handling I0510 22:25:43.359220 6 log.go:172] (0xc002320e60) (3) Data frame sent I0510 22:25:43.359234 6 log.go:172] (0xc001e0c9a0) Data frame received for 3 I0510 22:25:43.359243 6 log.go:172] (0xc002320e60) (3) Data frame handling I0510 22:25:43.360467 6 log.go:172] (0xc001e0c9a0) Data frame received for 5 I0510 22:25:43.360487 6 log.go:172] (0xc0027fe000) (5) Data frame handling I0510 22:25:43.360855 6 log.go:172] (0xc001e0c9a0) Data frame received for 1 I0510 22:25:43.360883 6 log.go:172] (0xc001a13ea0) (1) Data frame handling I0510 22:25:43.360902 6 log.go:172] (0xc001a13ea0) (1) Data frame sent I0510 22:25:43.360917 6 log.go:172] (0xc001e0c9a0) (0xc001a13ea0) Stream removed, broadcasting: 1 I0510 22:25:43.360935 6 log.go:172] (0xc001e0c9a0) Go away received I0510 22:25:43.361085 6 log.go:172] (0xc001e0c9a0) (0xc001a13ea0) Stream removed, broadcasting: 1 I0510 22:25:43.361261 6 log.go:172] 
(0xc001e0c9a0) (0xc002320e60) Stream removed, broadcasting: 3 I0510 22:25:43.361293 6 log.go:172] (0xc001e0c9a0) (0xc0027fe000) Stream removed, broadcasting: 5 May 10 22:25:43.361: INFO: Found all expected endpoints: [netserver-1] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 10 22:25:43.361: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-6849" for this suite. • [SLOW TEST:28.435 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29 should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":264,"skipped":4269,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 10 22:25:43.369: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139 [It] should be able to change the type from 
NodePort to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating a service nodeport-service with the type=NodePort in namespace services-3481 STEP: Creating active service to test reachability when its FQDN is referred as externalName for another service STEP: creating service externalsvc in namespace services-3481 STEP: creating replication controller externalsvc in namespace services-3481 I0510 22:25:43.750151 6 runners.go:189] Created replication controller with name: externalsvc, namespace: services-3481, replica count: 2 I0510 22:25:46.800523 6 runners.go:189] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0510 22:25:49.800760 6 runners.go:189] externalsvc Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady STEP: changing the NodePort service to type=ExternalName May 10 22:25:50.447: INFO: Creating new exec pod May 10 22:25:54.551: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-3481 execpodwkmld -- /bin/sh -x -c nslookup nodeport-service' May 10 22:25:54.770: INFO: stderr: "I0510 22:25:54.681573 4329 log.go:172] (0xc0005ac2c0) (0xc00020f540) Create stream\nI0510 22:25:54.681632 4329 log.go:172] (0xc0005ac2c0) (0xc00020f540) Stream added, broadcasting: 1\nI0510 22:25:54.684373 4329 log.go:172] (0xc0005ac2c0) Reply frame received for 1\nI0510 22:25:54.684451 4329 log.go:172] (0xc0005ac2c0) (0xc0009f4000) Create stream\nI0510 22:25:54.684476 4329 log.go:172] (0xc0005ac2c0) (0xc0009f4000) Stream added, broadcasting: 3\nI0510 22:25:54.685713 4329 log.go:172] (0xc0005ac2c0) Reply frame received for 3\nI0510 22:25:54.685750 4329 log.go:172] (0xc0005ac2c0) (0xc000665ae0) Create stream\nI0510 22:25:54.685763 4329 log.go:172] (0xc0005ac2c0) (0xc000665ae0) Stream added, broadcasting: 
5\nI0510 22:25:54.686734 4329 log.go:172] (0xc0005ac2c0) Reply frame received for 5\nI0510 22:25:54.753973 4329 log.go:172] (0xc0005ac2c0) Data frame received for 5\nI0510 22:25:54.754012 4329 log.go:172] (0xc000665ae0) (5) Data frame handling\nI0510 22:25:54.754036 4329 log.go:172] (0xc000665ae0) (5) Data frame sent\n+ nslookup nodeport-service\nI0510 22:25:54.760432 4329 log.go:172] (0xc0005ac2c0) Data frame received for 3\nI0510 22:25:54.760457 4329 log.go:172] (0xc0009f4000) (3) Data frame handling\nI0510 22:25:54.760475 4329 log.go:172] (0xc0009f4000) (3) Data frame sent\nI0510 22:25:54.761750 4329 log.go:172] (0xc0005ac2c0) Data frame received for 3\nI0510 22:25:54.761778 4329 log.go:172] (0xc0009f4000) (3) Data frame handling\nI0510 22:25:54.761811 4329 log.go:172] (0xc0009f4000) (3) Data frame sent\nI0510 22:25:54.762309 4329 log.go:172] (0xc0005ac2c0) Data frame received for 3\nI0510 22:25:54.762334 4329 log.go:172] (0xc0009f4000) (3) Data frame handling\nI0510 22:25:54.762359 4329 log.go:172] (0xc0005ac2c0) Data frame received for 5\nI0510 22:25:54.762384 4329 log.go:172] (0xc000665ae0) (5) Data frame handling\nI0510 22:25:54.764459 4329 log.go:172] (0xc0005ac2c0) Data frame received for 1\nI0510 22:25:54.764501 4329 log.go:172] (0xc00020f540) (1) Data frame handling\nI0510 22:25:54.764534 4329 log.go:172] (0xc00020f540) (1) Data frame sent\nI0510 22:25:54.764566 4329 log.go:172] (0xc0005ac2c0) (0xc00020f540) Stream removed, broadcasting: 1\nI0510 22:25:54.764599 4329 log.go:172] (0xc0005ac2c0) Go away received\nI0510 22:25:54.765040 4329 log.go:172] (0xc0005ac2c0) (0xc00020f540) Stream removed, broadcasting: 1\nI0510 22:25:54.765064 4329 log.go:172] (0xc0005ac2c0) (0xc0009f4000) Stream removed, broadcasting: 3\nI0510 22:25:54.765076 4329 log.go:172] (0xc0005ac2c0) (0xc000665ae0) Stream removed, broadcasting: 5\n" May 10 22:25:54.770: INFO: stdout: 
"Server:\t\t10.96.0.10\nAddress:\t10.96.0.10#53\n\nnodeport-service.services-3481.svc.cluster.local\tcanonical name = externalsvc.services-3481.svc.cluster.local.\nName:\texternalsvc.services-3481.svc.cluster.local\nAddress: 10.99.79.249\n\n" STEP: deleting ReplicationController externalsvc in namespace services-3481, will wait for the garbage collector to delete the pods May 10 22:25:54.831: INFO: Deleting ReplicationController externalsvc took: 6.884901ms May 10 22:25:55.231: INFO: Terminating ReplicationController externalsvc pods took: 400.242746ms May 10 22:26:09.266: INFO: Cleaning up the NodePort to ExternalName test service [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 10 22:26:09.299: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-3481" for this suite. [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143 • [SLOW TEST:25.954 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from NodePort to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance]","total":278,"completed":265,"skipped":4290,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 10 22:26:09.324: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:125 STEP: Setting up server cert STEP: Create role binding to let cr conversion webhook read extension-apiserver-authentication STEP: Deploying the custom resource conversion webhook pod STEP: Wait for the deployment to be ready May 10 22:26:10.284: INFO: deployment "sample-crd-conversion-webhook-deployment" doesn't have the required revision set May 10 22:26:12.294: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724746370, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724746370, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724746370, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724746370, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-78dcf5dd84\" is progressing."}}, CollisionCount:(*int32)(nil)} May 10 22:26:14.299: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, 
ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724746370, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724746370, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724746370, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724746370, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-78dcf5dd84\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 10 22:26:17.561: INFO: Waiting for amount of service:e2e-test-crd-conversion-webhook endpoints to be 1 [It] should be able to convert a non homogeneous list of CRs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 10 22:26:17.613: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating a v1 custom resource STEP: Create a v2 custom resource STEP: List CRs in v1 STEP: List CRs in v2 [AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 10 22:26:23.140: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-webhook-4724" for this suite. 
[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:136 • [SLOW TEST:15.394 seconds] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to convert a non homogeneous list of CRs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","total":278,"completed":266,"skipped":4322,"failed":0} SS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 10 22:26:24.718: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:86 May 10 22:26:24.872: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready May 10 22:26:24.916: INFO: Waiting for terminating namespaces to be deleted... 
May 10 22:26:24.939: INFO: Logging pods the kubelet thinks is on node jerma-worker before test
May 10 22:26:25.223: INFO: kindnet-c5svj from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container statuses recorded)
May 10 22:26:25.223: INFO: Container kindnet-cni ready: true, restart count 0
May 10 22:26:25.223: INFO: kube-proxy-44mlz from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container statuses recorded)
May 10 22:26:25.223: INFO: Container kube-proxy ready: true, restart count 0
May 10 22:26:25.223: INFO: Logging pods the kubelet thinks is on node jerma-worker2 before test
May 10 22:26:25.281: INFO: kube-hunter-8g6pb from default started at 2020-03-26 15:21:33 +0000 UTC (1 container statuses recorded)
May 10 22:26:25.281: INFO: Container kube-hunter ready: false, restart count 0
May 10 22:26:25.281: INFO: kindnet-zk6sq from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container statuses recorded)
May 10 22:26:25.281: INFO: Container kindnet-cni ready: true, restart count 0
May 10 22:26:25.281: INFO: kube-bench-hk6h6 from default started at 2020-03-26 15:21:52 +0000 UTC (1 container statuses recorded)
May 10 22:26:25.281: INFO: Container kube-bench ready: false, restart count 0
May 10 22:26:25.281: INFO: kube-proxy-75q42 from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container statuses recorded)
May 10 22:26:25.281: INFO: Container kube-proxy ready: true, restart count 0
[It] validates that NodeSelector is respected if not matching [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Trying to schedule Pod with nonempty NodeSelector.
STEP: Considering event: Type = [Warning], Name = [restricted-pod.160dcb20d8b3bb54], Reason = [FailedScheduling], Message = [0/3 nodes are available: 3 node(s) didn't match node selector.] 
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 10 22:26:26.471: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-2366" for this suite. [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:77 •{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance]","total":278,"completed":267,"skipped":4324,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 10 22:26:26.480: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should orphan pods created by rc if delete options say so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the rc STEP: delete the rc STEP: wait for the rc to be deleted STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the pods STEP: Gathering metrics W0510 22:27:07.785907 6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. 
May 10 22:27:07.786: INFO: For apiserver_request_total:
For apiserver_request_latency_seconds:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:
[AfterEach] [sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
May 10 22:27:07.786: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-176" for this suite. 
• [SLOW TEST:41.312 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should orphan pods created by rc if delete options say so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance]","total":278,"completed":268,"skipped":4339,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected combined /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 10 22:27:07.792: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name configmap-projected-all-test-volume-43009c05-28bf-4d00-be90-ddd5ddb12594 STEP: Creating secret with name secret-projected-all-test-volume-1de80554-a5d4-4087-b893-1b2c70e1e75b STEP: Creating a pod to test Check all projections for projected volume plugin May 10 22:27:07.867: INFO: Waiting up to 5m0s for pod "projected-volume-9b14f767-74f6-4e5e-bcdc-1ed72e1f466d" in namespace "projected-6765" to be "success or failure" May 10 22:27:07.884: INFO: Pod "projected-volume-9b14f767-74f6-4e5e-bcdc-1ed72e1f466d": Phase="Pending", 
Reason="", readiness=false. Elapsed: 16.344283ms May 10 22:27:09.907: INFO: Pod "projected-volume-9b14f767-74f6-4e5e-bcdc-1ed72e1f466d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.039308085s May 10 22:27:11.909: INFO: Pod "projected-volume-9b14f767-74f6-4e5e-bcdc-1ed72e1f466d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.041930302s STEP: Saw pod success May 10 22:27:11.909: INFO: Pod "projected-volume-9b14f767-74f6-4e5e-bcdc-1ed72e1f466d" satisfied condition "success or failure" May 10 22:27:11.911: INFO: Trying to get logs from node jerma-worker pod projected-volume-9b14f767-74f6-4e5e-bcdc-1ed72e1f466d container projected-all-volume-test: STEP: delete the pod May 10 22:27:11.927: INFO: Waiting for pod projected-volume-9b14f767-74f6-4e5e-bcdc-1ed72e1f466d to disappear May 10 22:27:11.932: INFO: Pod projected-volume-9b14f767-74f6-4e5e-bcdc-1ed72e1f466d no longer exists [AfterEach] [sig-storage] Projected combined /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 10 22:27:11.932: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-6765" for this suite. 
•{"msg":"PASSED [sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance]","total":278,"completed":269,"skipped":4368,"failed":0} ------------------------------ [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 10 22:27:11.939: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:69 [It] RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 10 22:27:12.048: INFO: Creating deployment "test-recreate-deployment" May 10 22:27:12.052: INFO: Waiting deployment "test-recreate-deployment" to be updated to revision 1 May 10 22:27:12.094: INFO: deployment "test-recreate-deployment" doesn't have the required revision set May 10 22:27:14.627: INFO: Waiting deployment "test-recreate-deployment" to complete May 10 22:27:14.646: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724746432, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724746432, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", 
Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724746432, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724746432, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-799c574856\" is progressing."}}, CollisionCount:(*int32)(nil)} May 10 22:27:16.692: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724746432, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724746432, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724746432, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724746432, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-799c574856\" is progressing."}}, CollisionCount:(*int32)(nil)} May 10 22:27:18.650: INFO: Triggering a new rollout for deployment "test-recreate-deployment" May 10 22:27:18.656: INFO: Updating deployment test-recreate-deployment May 10 22:27:18.656: INFO: Watching deployment "test-recreate-deployment" to verify that new pods will not run with olds pods [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:63 May 10 22:27:19.684: INFO: Deployment "test-recreate-deployment": &Deployment{ObjectMeta:{test-recreate-deployment deployment-6513 
/apis/apps/v1/namespaces/deployment-6513/deployments/test-recreate-deployment da2b34ad-3e25-45e4-b752-fe8e308a3bbf 15084952 2 2020-05-10 22:27:12 +0000 UTC map[name:sample-pod-3] map[deployment.kubernetes.io/revision:2] [] [] []},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0050067e8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},Strategy:DeploymentStrategy{Type:Recreate,RollingUpdate:nil,},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:0,UnavailableReplicas:1,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:False,Reason:MinimumReplicasUnavailable,Message:Deployment does not have minimum availability.,LastUpdateTime:2020-05-10 22:27:19 +0000 UTC,LastTransitionTime:2020-05-10 22:27:19 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:ReplicaSetUpdated,Message:ReplicaSet "test-recreate-deployment-5f94c574ff" is progressing.,LastUpdateTime:2020-05-10 22:27:19 +0000 UTC,LastTransitionTime:2020-05-10 22:27:12 +0000 UTC,},},ReadyReplicas:0,CollisionCount:nil,},} May 10 22:27:19.689: INFO: New ReplicaSet 
"test-recreate-deployment-5f94c574ff" of Deployment "test-recreate-deployment": &ReplicaSet{ObjectMeta:{test-recreate-deployment-5f94c574ff deployment-6513 /apis/apps/v1/namespaces/deployment-6513/replicasets/test-recreate-deployment-5f94c574ff e2cb0183-34f8-4523-8a89-523335094df5 15084950 1 2020-05-10 22:27:18 +0000 UTC map[name:sample-pod-3 pod-template-hash:5f94c574ff] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:1 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment test-recreate-deployment da2b34ad-3e25-45e4-b752-fe8e308a3bbf 0xc0049947a7 0xc0049947a8}] [] []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 5f94c574ff,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3 pod-template-hash:5f94c574ff] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc004994808 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} May 10 22:27:19.689: INFO: All old ReplicaSets of Deployment "test-recreate-deployment": May 10 22:27:19.690: INFO: &ReplicaSet{ObjectMeta:{test-recreate-deployment-799c574856 deployment-6513 
/apis/apps/v1/namespaces/deployment-6513/replicasets/test-recreate-deployment-799c574856 b3590a5c-1db4-417b-bc84-a263c2dd8208 15084937 2 2020-05-10 22:27:12 +0000 UTC map[name:sample-pod-3 pod-template-hash:799c574856] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:1 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-recreate-deployment da2b34ad-3e25-45e4-b752-fe8e308a3bbf 0xc004994877 0xc004994878}] [] []},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 799c574856,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3 pod-template-hash:799c574856] map[] [] [] []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0049948e8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} May 10 22:27:19.692: INFO: Pod "test-recreate-deployment-5f94c574ff-56kzc" is not available: &Pod{ObjectMeta:{test-recreate-deployment-5f94c574ff-56kzc test-recreate-deployment-5f94c574ff- deployment-6513 /api/v1/namespaces/deployment-6513/pods/test-recreate-deployment-5f94c574ff-56kzc 2fbc29af-6180-443d-a0c8-e4d84f55b2cf 15084951 0 2020-05-10 22:27:19 +0000 UTC map[name:sample-pod-3 pod-template-hash:5f94c574ff] 
map[] [{apps/v1 ReplicaSet test-recreate-deployment-5f94c574ff e2cb0183-34f8-4523-8a89-523335094df5 0xc004994d37 0xc004994d38}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-9fckw,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-9fckw,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-9fckw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGr
oups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-10 22:27:19 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-10 22:27:19 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-10 22:27:19 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-10 22:27:19 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.8,PodIP:,StartTime:2020-05-10 22:27:19 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
[AfterEach] [sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
May 10 22:27:19.693: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-6513" for this suite.
• [SLOW TEST:7.759 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  RecreateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance]","total":278,"completed":270,"skipped":4368,"failed":0}
SSSSSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replica set. [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
May 10 22:27:19.698: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and capture the life of a replica set. [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
STEP: Creating a ReplicaSet
STEP: Ensuring resource quota status captures replicaset creation
STEP: Deleting a ReplicaSet
STEP: Ensuring resource quota status released usage
[AfterEach] [sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
May 10 22:27:30.849: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-3998" for this suite.
• [SLOW TEST:11.159 seconds]
[sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a replica set. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replica set. [Conformance]","total":278,"completed":271,"skipped":4379,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Deployment deployment should support rollover [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
May 10 22:27:30.857: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:69
[It] deployment should support rollover [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
May 10 22:27:30.939: INFO: Pod name rollover-pod: Found 0 pods out of 1
May 10 22:27:35.946: INFO: Pod name rollover-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
May 10 22:27:35.946: INFO: Waiting for pods owned by replica set "test-rollover-controller" to become ready
May 10 22:27:37.950: INFO: Creating deployment "test-rollover-deployment"
May 10 22:27:37.970: INFO: Make sure deployment "test-rollover-deployment" performs scaling operations
May 10 22:27:39.977: INFO: Check revision of new replica set for deployment "test-rollover-deployment"
May 10 22:27:39.983: INFO: Ensure that both replica sets have 1 created replica
May 10 22:27:39.989: INFO: Rollover old replica sets for deployment "test-rollover-deployment" with new image update
May 10 22:27:39.995: INFO: Updating deployment test-rollover-deployment
May 10 22:27:39.995: INFO: Wait deployment "test-rollover-deployment" to be observed by the deployment controller
May 10 22:27:42.169: INFO: Wait for revision update of deployment
"test-rollover-deployment" to 2 May 10 22:27:42.321: INFO: Make sure deployment "test-rollover-deployment" is complete May 10 22:27:42.329: INFO: all replica sets need to contain the pod-template-hash label May 10 22:27:42.329: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724746458, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724746458, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724746460, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724746458, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)} May 10 22:27:44.338: INFO: all replica sets need to contain the pod-template-hash label May 10 22:27:44.338: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724746458, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724746458, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724746463, loc:(*time.Location)(0x78ee0c0)}}, 
LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724746458, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)} May 10 22:27:46.337: INFO: all replica sets need to contain the pod-template-hash label May 10 22:27:46.338: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724746458, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724746458, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724746463, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724746458, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)} May 10 22:27:48.338: INFO: all replica sets need to contain the pod-template-hash label May 10 22:27:48.338: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724746458, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724746458, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", 
LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724746463, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724746458, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)} May 10 22:27:50.348: INFO: all replica sets need to contain the pod-template-hash label May 10 22:27:50.348: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724746458, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724746458, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724746463, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724746458, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)} May 10 22:27:52.338: INFO: all replica sets need to contain the pod-template-hash label May 10 22:27:52.338: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724746458, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724746458, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasAvailable", 
Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724746463, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724746458, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)} May 10 22:27:54.346: INFO: May 10 22:27:54.346: INFO: Ensure that both old replica sets have no replicas [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:63 May 10 22:27:54.354: INFO: Deployment "test-rollover-deployment": &Deployment{ObjectMeta:{test-rollover-deployment deployment-5076 /apis/apps/v1/namespaces/deployment-5076/deployments/test-rollover-deployment 7fcb28f1-15c9-4f49-be82-73bb2f78c8ae 15085170 2 2020-05-10 22:27:37 +0000 UTC map[name:rollover-pod] map[deployment.kubernetes.io/revision:2] [] [] []},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod] map[] [] [] []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc002afd6f8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] 
[]}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:0,MaxSurge:1,},},MinReadySeconds:10,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2020-05-10 22:27:38 +0000 UTC,LastTransitionTime:2020-05-10 22:27:38 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-rollover-deployment-574d6dfbff" has successfully progressed.,LastUpdateTime:2020-05-10 22:27:53 +0000 UTC,LastTransitionTime:2020-05-10 22:27:38 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},} May 10 22:27:54.393: INFO: New ReplicaSet "test-rollover-deployment-574d6dfbff" of Deployment "test-rollover-deployment": &ReplicaSet{ObjectMeta:{test-rollover-deployment-574d6dfbff deployment-5076 /apis/apps/v1/namespaces/deployment-5076/replicasets/test-rollover-deployment-574d6dfbff 913ba856-2adc-499e-88e2-5ec3c3d71dff 15085159 2 2020-05-10 22:27:39 +0000 UTC map[name:rollover-pod pod-template-hash:574d6dfbff] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment test-rollover-deployment 7fcb28f1-15c9-4f49-be82-73bb2f78c8ae 0xc005006077 0xc005006078}] [] []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 574d6dfbff,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod-template-hash:574d6dfbff] map[] [] [] []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent 
SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0050060e8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:2,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} May 10 22:27:54.393: INFO: All old ReplicaSets of Deployment "test-rollover-deployment": May 10 22:27:54.393: INFO: &ReplicaSet{ObjectMeta:{test-rollover-controller deployment-5076 /apis/apps/v1/namespaces/deployment-5076/replicasets/test-rollover-controller 9f0406a5-e63a-4785-95a9-5dbf23d0e034 15085169 2 2020-05-10 22:27:30 +0000 UTC map[name:rollover-pod pod:httpd] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2] [{apps/v1 Deployment test-rollover-deployment 7fcb28f1-15c9-4f49-be82-73bb2f78c8ae 0xc002afdf97 0xc002afdf98}] [] []},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod:httpd] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc005006008 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] 
[]}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} May 10 22:27:54.393: INFO: &ReplicaSet{ObjectMeta:{test-rollover-deployment-f6c94f66c deployment-5076 /apis/apps/v1/namespaces/deployment-5076/replicasets/test-rollover-deployment-f6c94f66c 2b3f0463-7791-4018-ad68-c4d3d9cff984 15085108 2 2020-05-10 22:27:37 +0000 UTC map[name:rollover-pod pod-template-hash:f6c94f66c] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-rollover-deployment 7fcb28f1-15c9-4f49-be82-73bb2f78c8ae 0xc005006150 0xc005006151}] [] []},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: f6c94f66c,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod-template-hash:f6c94f66c] map[] [] [] []} {[] [] [{redis-slave gcr.io/google_samples/gb-redisslave:nonexistent [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0050061d8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} May 10 22:27:54.398: INFO: Pod "test-rollover-deployment-574d6dfbff-7qln4" is available: 
&Pod{ObjectMeta:{test-rollover-deployment-574d6dfbff-7qln4 test-rollover-deployment-574d6dfbff- deployment-5076 /api/v1/namespaces/deployment-5076/pods/test-rollover-deployment-574d6dfbff-7qln4 fd9c375d-c908-4ac7-a8cd-a6ce4fa7485c 15085127 0 2020-05-10 22:27:40 +0000 UTC map[name:rollover-pod pod-template-hash:574d6dfbff] map[] [{apps/v1 ReplicaSet test-rollover-deployment-574d6dfbff 913ba856-2adc-499e-88e2-5ec3c3d71dff 0xc005006707 0xc005006708}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-nnz6g,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-nnz6g,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-nnz6g,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,Terminatio
nGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-10 22:27:40 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-10 22:27:43 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-10 22:27:43 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-10 22:27:40 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.8,PodIP:10.244.2.48,StartTime:2020-05-10 22:27:40 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-05-10 22:27:42 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,ImageID:gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5,ContainerID:containerd://43aafa2c5ab5a14a9e9d785987985d786dd9382cbf98c884cdd865ea47612247,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.48,},},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 10 22:27:54.398: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-5076" for this suite. 
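The poll loop above prints the same DeploymentStatus repeatedly until the new ReplicaSet's pod becomes available and the old ReplicaSets scale to zero. As an illustration only (a hypothetical helper, not the e2e framework's own Go code), the convergence condition the framework is effectively waiting on can be sketched as:

```python
# Hypothetical sketch of the completeness rule polled during a rollover:
# every replica is updated and available, and the controller has observed
# the latest generation. Illustrative only, not the framework's code.

def deployment_complete(spec_replicas, status):
    """Return True once a Deployment's status has fully converged."""
    return (
        status["updatedReplicas"] == spec_replicas
        and status["replicas"] == spec_replicas
        and status["availableReplicas"] == spec_replicas
        and status["observedGeneration"] >= status["generation"]
    )

# Mid-rollover snapshot, matching the repeated dumps in the log:
# Replicas:2, UpdatedReplicas:1, AvailableReplicas:1 against spec replicas=1.
mid = {"replicas": 2, "updatedReplicas": 1, "availableReplicas": 1,
       "observedGeneration": 2, "generation": 2}

# Converged state once the old ReplicaSets have scaled to zero.
done = {"replicas": 1, "updatedReplicas": 1, "availableReplicas": 1,
        "observedGeneration": 2, "generation": 2}

print(deployment_complete(1, mid), deployment_complete(1, done))  # False True
```

The strategy dumped above (MaxUnavailable:0, MaxSurge:1, MinReadySeconds:10) also explains the pacing: the rollout surges one new pod at a time, and the new pod must stay Ready for 10 seconds before counting as available, which is consistent with the gap between the pod turning Ready at 22:27:43 and the rollout completing around 22:27:53.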
• [SLOW TEST:23.554 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  deployment should support rollover [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] Deployment deployment should support rollover [Conformance]","total":278,"completed":272,"skipped":4425,"failed":0}
SSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
May 10 22:27:54.411: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[BeforeEach] when scheduling a busybox command that always fails in a pod
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81
[It] should be possible to delete [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[AfterEach] [k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
May 10 22:27:54.771: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-8799" for this suite.
•{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance]","total":278,"completed":273,"skipped":4428,"failed":0}
SSSSSSS
------------------------------
[sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
May 10 22:27:54.795: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272
[It] should create and stop a working application [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating all guestbook components
May 10 22:27:54.948: INFO: apiVersion: v1
kind: Service
metadata:
  name: agnhost-slave
  labels:
    app: agnhost
    role: slave
    tier: backend
spec:
  ports:
  - port: 6379
  selector:
    app: agnhost
    role: slave
    tier: backend

May 10 22:27:54.948: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-6991'
May 10 22:27:55.331: INFO: stderr: ""
May 10 22:27:55.331: INFO: stdout: "service/agnhost-slave created\n"
May 10 22:27:55.332: INFO: apiVersion: v1
kind: Service
metadata:
  name: agnhost-master
  labels:
    app: agnhost
    role: master
    tier: backend
spec:
  ports:
  - port: 6379
    targetPort: 6379
  selector:
    app: agnhost
    role: master
    tier: backend

May 10 22:27:55.332: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-6991'
May 10 22:27:55.620: INFO: stderr: ""
May 10 22:27:55.620: INFO: stdout: "service/agnhost-master created\n"
May 10 22:27:55.620: INFO: apiVersion: v1
kind: Service
metadata:
  name: frontend
  labels:
    app: guestbook
    tier: frontend
spec:
  # if your cluster supports it, uncomment the following to automatically create
  # an external load-balanced IP for the frontend service.
  # type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: guestbook
    tier: frontend

May 10 22:27:55.620: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-6991'
May 10 22:27:55.966: INFO: stderr: ""
May 10 22:27:55.966: INFO: stdout: "service/frontend created\n"
May 10 22:27:55.967: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend
spec:
  replicas: 3
  selector:
    matchLabels:
      app: guestbook
      tier: frontend
  template:
    metadata:
      labels:
        app: guestbook
        tier: frontend
    spec:
      containers:
      - name: guestbook-frontend
        image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8
        args: [ "guestbook", "--backend-port", "6379" ]
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 80

May 10 22:27:55.967: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-6991'
May 10 22:27:56.208: INFO: stderr: ""
May 10 22:27:56.208: INFO: stdout: "deployment.apps/frontend created\n"
May 10 22:27:56.208: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: agnhost-master
spec:
  replicas: 1
  selector:
    matchLabels:
      app: agnhost
      role: master
      tier: backend
  template:
    metadata:
      labels:
        app: agnhost
        role: master
        tier: backend
    spec:
      containers:
      - name: master
        image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8
        args: [ "guestbook", "--http-port", "6379" ]
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 6379

May 10 22:27:56.208: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-6991'
May 10 22:27:56.902: INFO: stderr: ""
May 10 22:27:56.902: INFO: stdout: "deployment.apps/agnhost-master created\n"
May 10 22:27:56.902: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: agnhost-slave
spec:
  replicas: 2
  selector:
    matchLabels:
      app: agnhost
      role: slave
      tier: backend
  template:
    metadata:
      labels:
        app: agnhost
        role: slave
        tier: backend
    spec:
      containers:
      - name: slave
        image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8
        args: [ "guestbook", "--slaveof", "agnhost-master", "--http-port", "6379" ]
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 6379

May 10 22:27:56.902: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-6991'
May 10 22:27:57.536: INFO: stderr: ""
May 10 22:27:57.536: INFO: stdout: "deployment.apps/agnhost-slave created\n"
STEP: validating guestbook app
May 10 22:27:57.536: INFO: Waiting for all frontend pods to be Running.
May 10 22:28:07.587: INFO: Waiting for frontend to serve content.
May 10 22:28:07.597: INFO: Trying to add a new entry to the guestbook.
May 10 22:28:07.607: INFO: Verifying that added entry can be retrieved.
STEP: using delete to clean up resources
May 10 22:28:07.614: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-6991'
May 10 22:28:07.759: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
May 10 22:28:07.759: INFO: stdout: "service \"agnhost-slave\" force deleted\n"
STEP: using delete to clean up resources
May 10 22:28:07.759: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-6991'
May 10 22:28:08.630: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated.
The resource may continue to run on the cluster indefinitely.\n"
May 10 22:28:08.630: INFO: stdout: "service \"agnhost-master\" force deleted\n"
STEP: using delete to clean up resources
May 10 22:28:08.630: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-6991'
May 10 22:28:08.832: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
May 10 22:28:08.832: INFO: stdout: "service \"frontend\" force deleted\n"
STEP: using delete to clean up resources
May 10 22:28:08.832: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-6991'
May 10 22:28:09.082: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
May 10 22:28:09.082: INFO: stdout: "deployment.apps \"frontend\" force deleted\n"
STEP: using delete to clean up resources
May 10 22:28:09.082: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-6991'
May 10 22:28:09.264: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
May 10 22:28:09.264: INFO: stdout: "deployment.apps \"agnhost-master\" force deleted\n"
STEP: using delete to clean up resources
May 10 22:28:09.265: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-6991'
May 10 22:28:09.482: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
May 10 22:28:09.482: INFO: stdout: "deployment.apps \"agnhost-slave\" force deleted\n"
[AfterEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
May 10 22:28:09.482: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-6991" for this suite.
• [SLOW TEST:15.119 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Guestbook application
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:380
    should create and stop a working application [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]","total":278,"completed":274,"skipped":4435,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] updates the published spec when one version gets renamed [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
May 10 22:28:09.915: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] updates the published spec when one version gets renamed [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: set up a multi version CRD
May 10 22:28:10.693: INFO: >>> kubeConfig: /root/.kube/config
STEP: rename a version
STEP: check the new version name is served
STEP: check the old version name is removed
STEP: check the other version is not changed
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
May 10 22:28:26.733: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-7584" for this suite.
• [SLOW TEST:16.825 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  updates the published spec when one version gets renamed [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] updates the published spec when one version gets renamed [Conformance]","total":278,"completed":275,"skipped":4481,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
May 10 22:28:26.740: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward api env vars May 10 22:28:26.826: INFO: Waiting up to 5m0s for pod "downward-api-895d9bce-a51f-4632-bbb5-9fc357146246" in namespace "downward-api-9731" to be "success or failure" May 10 22:28:26.835: INFO: Pod "downward-api-895d9bce-a51f-4632-bbb5-9fc357146246": Phase="Pending", Reason="", readiness=false. Elapsed: 8.543008ms May 10 22:28:28.839: INFO: Pod "downward-api-895d9bce-a51f-4632-bbb5-9fc357146246": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012453873s May 10 22:28:30.843: INFO: Pod "downward-api-895d9bce-a51f-4632-bbb5-9fc357146246": Phase="Running", Reason="", readiness=true. Elapsed: 4.016294876s May 10 22:28:32.847: INFO: Pod "downward-api-895d9bce-a51f-4632-bbb5-9fc357146246": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.020710783s STEP: Saw pod success May 10 22:28:32.847: INFO: Pod "downward-api-895d9bce-a51f-4632-bbb5-9fc357146246" satisfied condition "success or failure" May 10 22:28:32.850: INFO: Trying to get logs from node jerma-worker pod downward-api-895d9bce-a51f-4632-bbb5-9fc357146246 container dapi-container: STEP: delete the pod May 10 22:28:32.865: INFO: Waiting for pod downward-api-895d9bce-a51f-4632-bbb5-9fc357146246 to disappear May 10 22:28:32.870: INFO: Pod downward-api-895d9bce-a51f-4632-bbb5-9fc357146246 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 10 22:28:32.870: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-9731" for this suite. 
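[Editor's note] A pod of the shape this Downward API test exercises can be sketched as follows. This is a minimal sketch, not the test's exact spec — the pod name, image, and env-var names here are illustrative. The point the test verifies: when a container sets no resources.limits, a resourceFieldRef to limits.cpu or limits.memory resolves to the node's allocatable values.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: downward-api-example        # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox                  # illustrative image
    command: ["sh", "-c", "echo CPU_LIMIT=$CPU_LIMIT MEMORY_LIMIT=$MEMORY_LIMIT"]
    env:
    - name: CPU_LIMIT
      valueFrom:
        resourceFieldRef:           # no limits set on this container,
          resource: limits.cpu      # so this falls back to node allocatable CPU
    - name: MEMORY_LIMIT
      valueFrom:
        resourceFieldRef:
          resource: limits.memory   # likewise, node allocatable memory
```

The container's log then shows the defaulted values, which is what the test reads back before deleting the pod.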
• [SLOW TEST:6.136 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:33 should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]","total":278,"completed":276,"skipped":4509,"failed":0} SS ------------------------------ [sig-network] Services should be able to change the type from ClusterIP to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 10 22:28:32.877: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139 [It] should be able to change the type from ClusterIP to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating a service clusterip-service with the type=ClusterIP in namespace services-1917 STEP: Creating active service to test reachability when its FQDN is referred as externalName for another service STEP: creating service externalsvc in namespace services-1917 STEP: creating replication controller externalsvc in namespace services-1917 I0510 22:28:33.135915 6 runners.go:189] Created replication controller with name: externalsvc, namespace: services-1917, replica count: 2 I0510 22:28:36.186339 6 runners.go:189] 
externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0510 22:28:39.186594 6 runners.go:189] externalsvc Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady STEP: changing the ClusterIP service to type=ExternalName May 10 22:28:39.259: INFO: Creating new exec pod May 10 22:28:45.440: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-1917 execpodtf9fs -- /bin/sh -x -c nslookup clusterip-service' May 10 22:28:45.690: INFO: stderr: "I0510 22:28:45.615799 4593 log.go:172] (0xc000aec2c0) (0xc00057e820) Create stream\nI0510 22:28:45.615862 4593 log.go:172] (0xc000aec2c0) (0xc00057e820) Stream added, broadcasting: 1\nI0510 22:28:45.620590 4593 log.go:172] (0xc000aec2c0) Reply frame received for 1\nI0510 22:28:45.620636 4593 log.go:172] (0xc000aec2c0) (0xc00023f5e0) Create stream\nI0510 22:28:45.620650 4593 log.go:172] (0xc000aec2c0) (0xc00023f5e0) Stream added, broadcasting: 3\nI0510 22:28:45.621896 4593 log.go:172] (0xc000aec2c0) Reply frame received for 3\nI0510 22:28:45.621936 4593 log.go:172] (0xc000aec2c0) (0xc000b1c000) Create stream\nI0510 22:28:45.621952 4593 log.go:172] (0xc000aec2c0) (0xc000b1c000) Stream added, broadcasting: 5\nI0510 22:28:45.622813 4593 log.go:172] (0xc000aec2c0) Reply frame received for 5\nI0510 22:28:45.673659 4593 log.go:172] (0xc000aec2c0) Data frame received for 5\nI0510 22:28:45.673686 4593 log.go:172] (0xc000b1c000) (5) Data frame handling\nI0510 22:28:45.673702 4593 log.go:172] (0xc000b1c000) (5) Data frame sent\n+ nslookup clusterip-service\nI0510 22:28:45.680401 4593 log.go:172] (0xc000aec2c0) Data frame received for 3\nI0510 22:28:45.680421 4593 log.go:172] (0xc00023f5e0) (3) Data frame handling\nI0510 22:28:45.680435 4593 log.go:172] (0xc00023f5e0) (3) Data frame sent\nI0510 22:28:45.681790 4593 log.go:172] (0xc000aec2c0) Data frame received for 
3\nI0510 22:28:45.681807 4593 log.go:172] (0xc00023f5e0) (3) Data frame handling\nI0510 22:28:45.681815 4593 log.go:172] (0xc00023f5e0) (3) Data frame sent\nI0510 22:28:45.682663 4593 log.go:172] (0xc000aec2c0) Data frame received for 5\nI0510 22:28:45.682700 4593 log.go:172] (0xc000b1c000) (5) Data frame handling\nI0510 22:28:45.682732 4593 log.go:172] (0xc000aec2c0) Data frame received for 3\nI0510 22:28:45.682748 4593 log.go:172] (0xc00023f5e0) (3) Data frame handling\nI0510 22:28:45.684228 4593 log.go:172] (0xc000aec2c0) Data frame received for 1\nI0510 22:28:45.684246 4593 log.go:172] (0xc00057e820) (1) Data frame handling\nI0510 22:28:45.684267 4593 log.go:172] (0xc00057e820) (1) Data frame sent\nI0510 22:28:45.684476 4593 log.go:172] (0xc000aec2c0) (0xc00057e820) Stream removed, broadcasting: 1\nI0510 22:28:45.684689 4593 log.go:172] (0xc000aec2c0) Go away received\nI0510 22:28:45.684830 4593 log.go:172] (0xc000aec2c0) (0xc00057e820) Stream removed, broadcasting: 1\nI0510 22:28:45.684846 4593 log.go:172] (0xc000aec2c0) (0xc00023f5e0) Stream removed, broadcasting: 3\nI0510 22:28:45.684855 4593 log.go:172] (0xc000aec2c0) (0xc000b1c000) Stream removed, broadcasting: 5\n" May 10 22:28:45.690: INFO: stdout: "Server:\t\t10.96.0.10\nAddress:\t10.96.0.10#53\n\nclusterip-service.services-1917.svc.cluster.local\tcanonical name = externalsvc.services-1917.svc.cluster.local.\nName:\texternalsvc.services-1917.svc.cluster.local\nAddress: 10.110.14.186\n\n" STEP: deleting ReplicationController externalsvc in namespace services-1917, will wait for the garbage collector to delete the pods May 10 22:28:45.846: INFO: Deleting ReplicationController externalsvc took: 7.420563ms May 10 22:28:46.146: INFO: Terminating ReplicationController externalsvc pods took: 300.373243ms May 10 22:28:59.348: INFO: Cleaning up the ClusterIP to ExternalName test service [AfterEach] [sig-network] Services 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
May 10 22:28:59.406: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-1917" for this suite.
[AfterEach] [sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143
• [SLOW TEST:26.541 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
should be able to change the type from ClusterIP to ExternalName [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] Services should be able to change the type from ClusterIP to ExternalName [Conformance]","total":278,"completed":277,"skipped":4511,"failed":0}
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
removes definition from spec when one version gets changed to not be served [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
May 10 22:28:59.418: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] removes definition from spec when one version gets changed to not be served [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: set up a multi version CRD
May 10 22:28:59.468: INFO: >>> kubeConfig: /root/.kube/config
STEP: mark a version not served
STEP: check the unserved version gets removed
STEP: check the other version is not changed
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
May 10 22:29:14.671: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-6425" for this suite.
• [SLOW TEST:15.265 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
removes definition from spec when one version gets changed to not be served [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] removes definition from spec when one version gets changed to not be served [Conformance]","total":278,"completed":278,"skipped":4511,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
May 10 22:29:14.683: INFO: Running AfterSuite actions on all nodes
May 10 22:29:14.683: INFO: Running AfterSuite actions on node 1
May 10 22:29:14.683: INFO: Skipping dumping logs from cluster
{"msg":"Test Suite completed","total":278,"completed":278,"skipped":4564,"failed":0}
Ran 278 of 4842 Specs in 4822.641 seconds
SUCCESS! -- 278 Passed | 0 Failed | 0 Pending | 4564 Skipped
PASS
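[Editor's note] The ClusterIP-to-ExternalName transition validated in the Services test earlier amounts to replacing the service's spec so that it carries only an externalName. A sketch of the resulting object, under the assumption that the test ends up with something of this shape (names taken from the test's namespace; the manifest itself is illustrative):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: clusterip-service
  namespace: services-1917
spec:
  type: ExternalName
  # cluster DNS now answers queries for clusterip-service
  # with a CNAME to this target instead of a ClusterIP A record
  externalName: externalsvc.services-1917.svc.cluster.local
```

This matches the test's nslookup output, which shows "clusterip-service.services-1917.svc.cluster.local canonical name = externalsvc.services-1917.svc.cluster.local." rather than a direct address.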